CRL Talks

Past Talks



The Role of Working Memory in Language Representation and Processing

Robert Kluender UC San Diego


The current consensus across cognitive domains is that working memory may be a mere epiphenomenon arising from the attention system acting upon long-term memory representations. This idea has serious consequences both for the nature of the human lexicon and for the nature of syntactic representation and processing. Likewise, the psychological reality of storage and/or maintenance functions (including capacity limitations), long considered a primary explanandum of the verbal working memory literature, has more recently been challenged and subjected to re-evaluation.

In this talk I present a brief overview of theoretical proposals regarding the general nature of working memory across cognitive domains, show how the language ERP literature on long-distance dependencies can successfully be recast in terms of encoding and retrieval to the exclusion of storage operations, and then turn to recent research in the visual working memory literature that presents intriguing parallels to issues prevalent in the verbal working memory literature.

I pose a number of questions for discussion that it seems to me need to be addressed going forward (to which – truth in advertising – I do not necessarily have any good answers): If working memory truly reduces to the focus of attention operating over long-term memory representations, then what is the exact nature of the linguistic representations in long-term memory over which the attention system operates in sentence processing? What is the nature of lexical entries in long-term memory and how are they assembled on line in any plausible fashion into syntactic representations? Feature representations are crucial to visual working memory paradigms for obvious reasons, and similarity-based interference models of verbal working memory likewise suggest that retrieval is primarily feature-driven. If this is true, what constraints does this fact impose on syntactic representations? To what extent do existing syntactic theories satisfy these constraints, and could such constraints be used as metrics to differentiate and possibly evaluate theories of syntactic representation against each other?


Description of visual scenes as well as sentence comprehension, using the Schema Architecture Language-Vision InterAction (SALVIA) cognitive model

Victor Barres USC


The Schema Architecture Language-Vision InterAction (SALVIA) is a cognitive-level model of the dynamic and incremental interactions that take place between the visuo-attentional system and the language production system. By simulating the production of scene descriptions, SALVIA provides an explicit framework to study the coordinated distributed processes that support visual scene apprehension, conceptualization, and grammatical processing leading to utterance formulation. In this presentation I will focus on how SALVIA reframes the psycholinguistic debate regarding the relations between gaze patterns and utterance forms, moving away from a dichotomy between serial modular (Griffin et al. 2000) and interactive views (Gleitman et al. 2007). By simultaneously modeling the impact of scene type and of the task's temporal requirements on the system's dynamics, the two views become two key points embedded in the more general model's behavioral space. I will demonstrate this using the controversial case of attention capture manipulations in Visual World Paradigm experiments and their impact on utterance structure. Along the way, I will show, as a preliminary but necessary result, how SALVIA models the impact of time pressure on the quality of utterances produced (measured by their structural compactness and grammatical complexity). Time permitting, to underscore the need both to move from a cognitive to a neurocognitive model and to move beyond one-sided models of our language apparatus, I will discuss how SALVIA is extended into a model of language comprehension, this time under the additional constraint of simulating key neuropsychology data points (with a focus on agrammatism).


Interactive Communicative Inference

Larry Muhlstein University of Chicago


In the search for an understanding of human communication, researchers often try to isolate listener and speaker roles and study them separately. Others claim that it is the intertwinedness of these roles that makes human communication special. This close relationship between listener and speaker has been characterized by concepts such as common ground, backchanneling, and alignment, but they are only part of the picture. Underlying all of these processes, there must be a mechanism that we use to make inferences about our interlocutors’ understanding of words and gestures that allows us to communicate robustly without assuming that we all take the same words to have the same meaning. In this talk, I explore this relationship between language and concepts and propose a mechanism through which communicative interaction can facilitate these latent conceptual inferences. I argue that using this mechanism to augment our understanding of human communication paves the way for a more precise account of the role of interaction in communication.


Individual Differences in Children's Learning of Measurement and Chemical Earth Science Concepts

Nancy Stein University of Chicago


Math and science concepts can be broken down into those that require conceptual understanding without mathematical calculation and those that require explicit understanding of numerical operations. Both types of knowledge are critical, but success in each is predicted by different cognitive aptitudes. Four studies are reported in which a total of 420 4th graders were assessed on digit span, spatial ability, and vocabulary-verbal comprehension to explore the role these skills play in the acquisition of mathematical measurement and chemical earth science concepts. Results showed that digit span predicted success on any item requiring numerical processing (correlation between success on measurement items and digit span, r = 0.69), whereas concepts not requiring numerical operations correlated with digit span at only r = 0.21. The strongest correlate of scientific conceptual understanding was vocabulary-verbal comprehension (r = 0.49). Digit span predicted not only performance on numerical items but also the number of repetitions needed to acquire accurate knowledge of multiplication; similar findings emerged for accurate learning of conceptual content in relation to vocabulary comprehension. Spatial reasoning was not significantly related to either type of item success.


Language learning, language use, and the evolution of linguistic structure

Kenny Smith University of Edinburgh


Language is a product of learning in individuals, and universal structural features of language presumably reflect properties of the way in which we learn. But language is not necessarily a direct reflection of properties of individual learners: languages are culturally-transmitted systems, which persist in populations via a repeated cycle of learning and use, where learners learn from linguistic data which represents the communicative behaviour of other individuals who learnt their language in the same way. Languages evolve as a result of their cultural transmission, and are therefore the product of a potentially complex interplay between the biases of human language learners, the communicative functions which language serves, and the ways in which languages are transmitted in populations. In this talk I will present a series of experiments, based around artificial language learning, dyadic interaction and iterated learning paradigms, which allow us to explore the relationship between learning and culture in shaping linguistic structure; I will finish with an experimental study looking at cultural evolution in non-human primates, which suggests that systematic structure may be an inevitable outcome of cultural transmission, rather than a reflection of uniquely human learning biases.


Metaphor & Emotion: Frames for Dealing with Hardship

Rose Hendricks UC San Diego


Do metaphors shape people’s emotional states and beliefs about dealing with adversity? Recovery from cancer is one hardship that many people face, and it can be mediated by the way people think about it. We investigate whether two common metaphors for describing a cancer experience – the battle and the journey – encourage people to make different inferences about the patient’s emotional state. I'll also share work looking at the language that people produce after encountering these metaphors, using it as a window into the mental models they construct and the ways they communicate metaphor-laden emotional information. This line of work is still in early stages, so I look forward to your insightful feedback!


A Neurocomputational Model of the N400 and the P600 in Language Comprehension

Harm Brouwer Saarland University


Ten years ago, researchers using event-related brain potentials (ERPs) to study language comprehension were puzzled by what looked like a Semantic Illusion: semantically anomalous but structurally well-formed sentences did not affect the N400 component — traditionally taken to reflect semantic integration — but instead produced a P600 effect, which is generally linked to syntactic processing. This "Semantic P600" effect led to a considerable amount of debate, and a number of complex processing models have been proposed as explanations. What these models have in common is that they postulate two or more separate processing streams in order to reconcile the Semantic Illusion and other semantically induced P600 effects with the traditional interpretations of the N400 and the P600. In this talk, we will challenge these multi-stream models and derive a simpler single-stream model, according to which the N400 component reflects the retrieval of word meaning from semantic memory, and the P600 component indexes the integration of this meaning into the unfolding utterance interpretation. We will then instantiate this "Retrieval–Integration (RI)" account as an explicit neurocomputational model. This neurocomputational model is the first to successfully simulate N400 and P600 amplitudes in language comprehension, and simulations with the model show that it captures N400 and P600 modulations for a wide spectrum of signature processing phenomena, including semantic anomaly, semantic expectancy, syntactic violations, garden paths, and, crucially, constructions evoking a "Semantic P600" effect.


A model of Event Knowledge

Jeff Elman UC San Diego


It has long been recognized that our knowledge of events and situations in the world plays a critical role in our ability to plan our own actions and to understand and anticipate the actions of others. This knowledge also provides us with useful data for learning about causal relations in the world. What has not been clear is what the form and structure of this knowledge is, how it is learned, and how it is deployed in real time. Despite many important theoretical proposals, often using different terminology – schemas, scripts, frames, situation models, event knowledge, among others – a model that addresses these three questions (the form, learning, and deployment of such knowledge) has proved elusive. In this talk I present a connectionist model of event knowledge, developed by Ken McRae and myself, that attempts to fill this gap. The model simulates a wide range of behaviors that have been observed in humans and are seen as reflecting the use of event knowledge. The model also makes testable predictions about behaviors not hitherto observed. It exhibits a flexibility and robustness in the face of novel situations that resembles that seen in humans. Most importantly, the model’s ability to learn event structure from experience, without prior stipulation, suggests a novel answer to the question ‘What is the form and representation of event knowledge?’


How much grammar does it take to use a noun? Syntactic effects in bare-noun production and comprehension

Nicholas Lester UCSB Linguistics


Many psycholinguistic paradigms investigate lexical processing using stimuli or techniques that target single words (lexical decision, picture naming, word naming, etc.). The fruit of this research is offered to explain the structure and flow of information within the mental lexicon. Understandably, these studies do not usually concern themselves with syntax beyond a small set of lexical categories (with some empirical support; e.g., La Heij et al., 1998). However, several studies have recently suggested that syntactic information is obligatorily accessed during the processing of individual words (e.g., Baayen et al., 2011; Cubelli et al., 2005). These studies have likewise focused on categorical information (e.g., part of speech, gender, count/mass), though some recent work has explored lexical variability within a single phrasal construction (e.g., the frequency distribution of prepositions across target nouns within prepositional phrases). Going further, linguistic theory suggests that distributions across syntactic constructions may also play a role (combinatoric potential; e.g., Branigan & Pickering, 1998). Psycholinguistic support for this notion comes from research on morphosyntactic distributions. For example, Serbian words, which are inflected for grammatical role (among other things), are recognized faster to the extent that (a) they approach a uniform probability distribution across case inflections (Moscoso del Prado Martín et al., 2004) and (b) they approach the prototypical probability distribution for words in their inflectional class (Milin et al., 2009).
In this talk, I provide evidence for a new, fully generalized syntactic effect in English lexical processing. I introduce several novel information-theoretic measures of syntactic diversity. These measures tap into both hierarchical asymmetries (heads vs. dependents) and word order. I correlate these measures with response times in several tasks, including picture naming, word naming, and visual lexical decision. Results suggest that syntax supports the processing of individual nouns in both production and comprehension, with a caveat: processing modalities may be tuned to different features of the syntactic distributions. Implications for representational and functional architecture are discussed. So, how much grammar does it take to use a noun? A lot.
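The specific diversity measures are not spelled out in this abstract, but a standard information-theoretic starting point is Shannon entropy over a word's distribution across syntactic slots. The sketch below is my own illustration, with made-up counts and slot names: a noun spread evenly across constructions has higher entropy (greater "syntactic diversity") than one confined mostly to a single slot.

```python
import math
from collections import Counter

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a frequency distribution."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical counts of how often each noun fills each syntactic slot.
dog = Counter(subject=40, object=35, prep_object=15, modifier=10)
idea = Counter(subject=85, object=10, prep_object=4, modifier=1)

# The more evenly a noun is distributed across constructions,
# the higher its entropy under this measure.
print(round(shannon_entropy(dog.values()), 2))   # more diverse
print(round(shannon_entropy(idea.values()), 2))  # less diverse
```

Measures of this family can also be conditioned on head vs. dependent position or on word order, which is presumably how hierarchical asymmetries enter the picture.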


A Neurocomputational Model of Surprisal in Language Comprehension

Matthew Crocker Saarland University


Surprisal Theory (Hale, 2001; Levy, 2008) asserts that the processing difficulty incurred by a word is proportional to its 'surprisal' – the negative log of its expectancy – as estimated by probabilistic language models. Such models are limited, however, in that they assume expectancy is determined by linguistic experience alone, making it difficult to accommodate the influence of world and situational knowledge. To address this limitation, we have developed a neurocomputational model of language processing that seamlessly integrates linguistic experience and probabilistic world knowledge in online comprehension. The model is a simple recurrent network (SRN: Elman, 1990) trained to map sentences onto rich probabilistic meaning representations derived from a Distributed Situation-state Space (DSS: Frank et al., 2003). Crucially, our DSS representations allow for the computation of online surprisal based on the likelihood of the sentence meaning after the just-processed word, given the sentence meaning prior to that word. We then demonstrate that our 'meaning-centric' characterisation of surprisal provides a more general index of the effort involved in mapping from the linguistic signal to rich, knowledge-driven situation models – capturing not only established surprisal phenomena reflecting linguistic experience, but also offering the potential for surprisal-based explanations of a range of findings that have demonstrated the importance of knowledge-, discourse-, and script-driven influences on processing difficulty.
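For reference, classical surprisal is just the negative log probability of a word given its preceding context; the meaning-centric account conditions on situation-model representations instead. A minimal sketch of the basic quantity, with illustrative probabilities that do not come from any trained model:

```python
import math

def surprisal(prob):
    """Surprisal in bits: s = -log2 P(word | context)."""
    return -math.log2(prob)

# A highly expected continuation incurs little predicted difficulty...
print(surprisal(0.5))              # 1.0 bit
# ...while an unexpected one incurs much more.
print(round(surprisal(0.001), 2))  # 9.97 bits
```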


Language comprehension in rich visual contexts: combining eye tracking and EEG

Pia Knoeferle Humboldt University Berlin


Listeners' eye movements to objects during spoken language comprehension have provided good evidence for the view that information from a non-linguistic visual context can rapidly affect syntactic structuring, and this evidence has shaped theories of language comprehension. From this research we have further learnt that the time course of eye movements can reflect distinct comprehension processes (e.g., visual attention to objects is slowed for non-canonical relative to canonical sentence structures). Good evidence that visual context affects distinct syntactic disambiguation and lexical-semantic processes has come, moreover, from the analysis of event-related brain potentials (ERPs). However, not all visual context effects seem to tap into distinct comprehension processes (e.g., incongruence between different spatial object depictions and an ensuing sentence results in the same ERP pattern). The present talk reviews the literature on visually situated language comprehension, and with a view to future research I will outline what theoretically interesting insights we might gain by jointly recording eye movements and event-related brain potentials during visually situated language comprehension.


Aligning generation and parsing

Shota Momma UCSD Psychology


We use our grammatical knowledge in more than one way. On the one hand, we use it to say what we want to say. On the other hand, we use it to comprehend what others are saying. In either case, we need to assemble the structure of sentences in a systematic fashion, in accordance with the grammar of the language. Despite the fact that the structures comprehenders and speakers assemble are systematic in an identical fashion (i.e., obey the same grammatical constraints), the two ‘modes’ of assembling sentence structures might or might not be performed by the same system. The potential existence of two independent structure-building systems underlying speaking and understanding doubles the problem of linking the theory of linguistic knowledge to the theory of linguistic performance, making the integration of linguistics and psycholinguistics harder. In this talk, I will discuss whether it is possible to design a single system that performs structure building in comprehension, i.e., parsing, and structure building in production, i.e., generation, so that the linking theory between knowledge and performance can also be unified. I will discuss both existing and new experimental data pertaining to how sentence structures are assembled in understanding and speaking, and attempt to show that the unification of parsing and generation is plausible.


Biology and culture in the evolution of rhythm

Andrea Ravignani Vrije Universiteit Brussel


Many human behaviours, like music and language, show structural regularities, some shared across all cultures and traditions. Why musical universals exist has been the object of theoretical speculation but has received little empirical attention. Here, by focusing on rhythm, we test the mechanisms underlying musical universals. Human participants are asked to imitate sets of randomly generated drumming sequences, after which their attempts at reproduction become the training set for the next participants in a transmission chain. The structure of drumming patterns, transmitted in independent chains of participants across cultural generations, “evolves”, adapting to human biology and cognition. Drumming patterns transmitted within cultures develop into rhythms that are easier to learn, distinctive for each experimental cultural tradition, and characterized by all six universals found in world music. Rhythmic structure hence emerges from the repeated enhancement of features that adapt to be easily perceived, imitated, and transmitted within a culture.
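The transmission-chain logic can be sketched in a few lines. The toy model below is my own assumption, not the authors' method: each "generation" copies a binary drumming pattern with occasional errors biased toward repeating the previous beat, a crude stand-in for human learning biases, and repetitive structure accumulates down the chain.

```python
import random

def transmit(pattern, error_rate=0.2, rng=random):
    """One cultural generation: an imperfect imitation of a drumming
    pattern, with copying errors biased toward local repetition."""
    out = list(pattern)
    for i in range(1, len(out)):
        if rng.random() < error_rate:
            out[i] = out[i - 1]  # error: repeat the previous beat
    return out

rng = random.Random(0)
pattern = [rng.choice([0, 1]) for _ in range(16)]  # random seed pattern
for _ in range(20):                                # 20 "generations"
    pattern = transmit(pattern, rng=rng)
print(pattern)
```

The real experiments of course use human imitators rather than a hard-coded error bias; the point of the sketch is only the iterated structure of the paradigm.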


Beyond brilliant babies and rapid acquisition:
Protracted perceptual learning better explains spoken language development

Sarah Creel UCSD Cognitive Science


Approaches to lower level language development – sounds and words – have typically focused on the first year of life and shortly beyond. However, the emphasis on rapidity of learning and early sensitivity has yielded only a partial picture of development. New technical and experimental advances have led to a reconceptualization of language development that emphasizes protracted processes of perceptual and associative learning during development, which may undergird more rapid real-time processes that cope with the ambiguity of language in the moment. This dramatic shift in perspective is more than a debate about the speed of learning: by moving to a view of speech and lexical development that extends considerably outside of infancy, new developmental factors—vocabulary, reading, speech production, and social interaction—may come into play, augmenting simple perceptual learning or statistical mechanisms.
I will present research on young (3- to 7-year-old) children’s recognition of speakers, accents, and affective prosody that together suggest the need for a new theoretical approach to perceptual category learning in the speech signal. Seemingly counter to claims of precocious native-language sound sensitivity in infants, my work suggests steady, incremental increases in children’s processing of indexical and paralinguistic information. Importantly, these “nonspeech” aspects of the speech signal—who is talking and what their affective state is—contribute integrally to full-scale adult language comprehension, and influence children's comprehension to the extent that children can access this information. The picture emerging from my work is that spoken language representations develop via a process of slow distributional learning, in combination with slow encoding of associations between sound patterns (voice properties, accent characteristics) and person knowledge.


Frontal control mechanisms in language production

Stéphanie K. Riès San Diego State University


Adults fluidly utter 2 to 3 words per second, selected from up to 100,000 words in the mental lexicon, and err only once every 1,000 words. Although seemingly easy, producing language is complex and depends on cognitive control processes that may be shared with non-linguistic cognitive functions. In particular, choosing words cannot be carried out adequately without cognitive control. Despite the central importance of our capacity to produce language and the immense personal and societal cost caused by its disruption, the spatio-temporal pattern of activation of the brain regions involved in word selection, and the precise role of these regions, remain largely unknown. I will present results from scalp and intracranial electrophysiological studies and from neuropsychological studies that are beginning to shed light on these issues. These results support the hypothesis that the posterior inferior left temporal cortex engages in word retrieval as semantic concepts become available. In parallel, medial and left prefrontal cortices tune in with left temporal activity on a trial-by-trial basis, supporting top-down control over interference resolution for word retrieval. Finally, computational modeling of neuropsychological data suggests that the left prefrontal cortex plays a role in adjusting the decision threshold for word selection in language production.


On the relation between syntactic theory and sentence processing, and a theory of island phenomena

William Matchin UCSD Linguistics


There is currently a seemingly intractable gulf among different syntactic theories. Generative syntactic theories in the Minimalist Program are insightful in providing a theory of the objects of language, tree structures. However, Minimalism does a poor job of explaining real-time sentence processing, child language acquisition, and neuroimaging and neuropsychological data. In contrast, “lexicalist” grammatical theories (e.g., TAG, Construction Grammar, Unification) do a much better job of connecting with real-time sentence processing, language acquisition, and neuroscience. However, lexicalist approaches lack insight into the objects of language: where do these stored structures come from, and why do they have the properties that they do? In this talk I propose a reconciliation between the two approaches along the lines of Frank (2002), positing a Minimalist grammar as a theory of how structures are generated, and TAG as a theory of the use of these structures during sentence production and comprehension. By making these connections more explicit, it is also possible to incorporate recent insights into the nature of working memory during sentence processing in explaining data traditionally covered by the theory of syntax. I argue that this integrated approach provides more successful insight into the nature of island phenomena than extant grammatical and processing accounts.


What do you mean, no? Studies in the development of negation

Roman Feiman UCSD Psychology


The words "no" and "not" have very abstract meanings -- among other things, they can combine with the meanings of other phrases to change the truth-value of a sentence. That they can do this in combination with very diverse semantic content requires that the other representations all be in some common format -- components in what is sometimes called the Language of Thought. Charting the development of logical words and concepts can play a role in constraining theories of how (and if) this format of representation might emerge.
Despite its abstract meaning, "no" is one of the first words kids say. Does this word carry its truth-functional meaning right away, or is it used in a different way by the youngest children? Arguing that prior studies of production cannot answer this question, I will present a line of research examining children's comprehension of the words "no" and "not". We find that, although they produce "no" at 16 months, children do not begin to understand the logical meaning of both "no" and "not" until after they turn two, nearly a year later. Additional eyetracking studies, looking at the online processing of negation, reveal some of the difficulty in constructing representations of negated content, showing separate semantic and pragmatic components.
Why does it take so long for kids to get the logical meaning of "no" from the time they start saying it, and why do they get the meanings of "no" and "not" at the same time? There are two general possibilities -- either the concept is not available for labeling until 24 months, or the word-to-concept mapping is a particularly hard problem to solve. I'll present some ongoing work that looks to disentangle these factors by comparing typical English-learning toddlers to older children adopted from Russia and China who are learning English for the first time, but have greater conceptual resources.


Innovating a communication system interactively: Negotiation for conventionalization

Ashley Micklos Linguistics and Anthropology, UCLA


The study I will present demonstrates how interaction – specifically negotiation and repair – can facilitate the emergence, evolution, and conventionalization of a silent gesture communication system (Goldin-Meadow et al, 2008; Schouwstra, 2012). In a modified iterated learning paradigm (Kirby, Cornish, & Smith, 2008), partners communicated noun-verb meanings using only silent gesture. The need to disambiguate similar noun-verb pairs (e.g. “a hammer” and “hammering”) drove these "new" language users to develop a morphology that allowed for quicker processing, easier transmission, and improved accuracy. The specific morphological system that emerged came about through a process of negotiation within the dyad. Negotiation involved reusing elements of prior gestures, even if temporally distant, to communicate a meaning. This is complementary to a similar phenomenon in speech produced over multiple turns (Goodwin, 2013). The face-to-face, contingent interaction of the experiment allows participants to build from one another’s prior gestures as a means of developing systematicity over generations. Transformative operations on prior gestures can emerge through repair as well: an immediate modification of a gesture can involve a reference to the gesture space or to a particular element of the gesture. We see examples of this in other-initiated repair sequences (Jefferson, 1974) within the communication game. Over simulated generations, participants modified and systematized prior gestures to conform to emergent conventions in the silent gesture system. By applying a discourse analytic approach to the use of repair in an experimental methodology for language evolution, we are able to determine not only whether interaction facilitates the emergence and learnability of a new communication system, but also how interaction affects such a system.


What are “pronoun reversals” on the autism spectrum and beyond?

David Perlmutter Linguistics, UCSD


Utterances like (1-2) by children on the autism spectrum and others (with translations into adult English in parentheses) exemplify “pronoun reversal” (PR):
(1) You want ride my back. (‘I want to ride on your back.’) 
(2) (At bedtime:) Me cover you Mommy. (‘You cover me, Mommy.’)
Researchers have treated PR as examples of children’s “errors” and “confusion.” They have focused on tabulating the percentage of such “errors” by children in different populations and at different stages of language acquisition.
This paper seeks a better understanding of PR by focusing on other questions:
(3) What is PR? 
(4) How is it acquired? 
(5) How is it lost? 
(6) Why does it exist?
We argue that PR is not about pronouns.
First, cross-linguistic evidence shows that while person is expressed on pronouns in English, in many other languages it is expressed on verbs. PR is really “person reversal.” We make two cross-linguistic predictions explicit.
Second, we argue that at a more fundamental level, PR is about the kinds of utterances in which PR appears. We distinguish two kinds of utterances:
(7) S-clones, which closely approximate (“clone”) the structure (including person) of utterances the child has heard (we show how this differs from “imitation”)
(8) Independent utterances (“indies”) initiated by the child
S-clones predominate in the early production of young children, for whom constructing indies is far more difficult.
The empirical heart of this paper lies in our evidence from two longitudinal case studies of person-reversing children, showing that PR predominates in S-clones while adult pronoun usage predominates in indies (even in the same time frame). The data illuminate contrasts between S-clones and indies.
What is PR? Why does it exist? We argue that PR is the expression of person in S-clones. As such, it derives from the source utterances from which S-clones are cloned. PR exists because it is a consequence of children’s S-cloning of heard utterances in the early stages of language acquisition. We show how this provides an account of how PR is acquired and maintained, using data from ASL, Slovenian, and English. As for its loss, our analysis makes a prediction: as children learn to construct indies and the ratio of S-clones to indies in their production declines correspondingly, so will the incidence of PR. The data currently available supports this prediction, but more data is needed to confirm or refute it.
We conclude by noting the potential utility of the concepts “S-clone” and “indie” in the study of language acquisition in general. We speculate that gaining the ability to construct indies, and to do so at an ever-increasing rate, is a significant turning point in the acquisition of language.


An old-fashioned theory of digital propaganda

Tyler Marghetis Psychological and Brain Sciences, Indiana University

+ more

Sanders’ stump speeches. Family dinner diatribes. Water-cooler screeds. When we listen to others, their utterances can reshape our thinking to conform to theirs, sometimes against our will. Such is the power of propaganda. I’d like to consider one possible mechanism of this mind-control: “digital propaganda,” where digital retains its traditional reference to digits, that is, fingers. By digital propaganda, therefore, I mean the use of co-speech gesture to propagate and perpetuate specific beliefs and larger conceptual frameworks.
In this talk, I focus on the propagation of entirely abstract domains—such as math, time, economics, or family relations. First, extending classic work on concrete, literal gestures, we demonstrate that metaphorical gestures can completely reverse interpretations of accompanying abstract speech. This occurs even when the listener is unaware of the source of their interpretation, misremembering gestural information as having been in speech. Next, we show that these metaphorical gestures have downstream effects on subsequent reasoning, mediated by the effect of gesture on interpretation. And we show that digital propaganda isn’t limited to isolated facts but can shape the mental representation of an entire abstract domain. In the spirit of clearing the file-drawer, I end by reporting a rather frustrating experimental failure in which metaphorical gestures had little or no impact on comprehension. (Interpretative help is welcome!) The hands, therefore, are a tool for digital propaganda, spreading abstract beliefs and encompassing frameworks -- at least sometimes.


Resumptive Pronouns: What can we learn from an ungrammatical construction about grammar, sentence planning, and language acquisition?

Adam Morgan UCSD, Psychology

+ more

"This is an example of a structure that nobody knows why we use it." Resumptive pronouns, like the "it" in the previous sentence, present a problem for standard accounts of grammar. On one hand, English speakers report that they sound bad, which typically indicates ungrammaticality. On the other hand, corpus and experimental work show that English speakers reliably produce resumptive pronouns in certain types of clauses, which seems to imply grammatical knowledge. Furthermore, resumptive pronouns exist and are grammatical in other languages, including Hebrew, Gbadi, and Irish. But if Hebrew- and English-speaking children are exposed to resumptive pronouns, then why does only the former group grammaticize them? In this talk, I will present a series of paired production and acceptability judgment studies whose results indicate that resumptive pronouns in English are a by-product of an early breakdown in production. I will then present pilot data from a production task in Hebrew, and discuss implications for the learnability of a grammatical pattern as a function of its frequency in the language.


Language to Literacy: The facilitative role of early vocabulary in English

Margaret Friend SDSU Psychology

+ more

The perspective that emerging literacy is dependent upon earlier developing language achievement guides the present paper. Recent large-scale studies have demonstrated a relation between early vocabulary and later language and literacy. Of particular interest are the mechanisms by which vocabulary comprehension in the 2nd year of life might support the acquisition of skills related to kindergarten readiness in the 5th year. Toward this end, we contrast parent report of early vocabulary with a direct, decontextualized assessment. Study 1 assesses the relation between word comprehension in the 2nd year and kindergarten readiness in the 5th year controlling for language proficiency in a group of monolingual English children. As expected, decontextualized receptive vocabulary at 22 months emerged as a significant predictor of kindergarten readiness, accounting uniquely for 29% of the variance when controlling for parent-reported vocabulary, maternal education, and child sex. This effect was fully mediated by decontextualized vocabulary in the 5th year such that concurrent PPVT scores accounted for 34% of the variance when controlling for maternal education, child sex, and early vocabulary. Importantly, early vocabulary significantly predicted PPVT scores, accounting for 19% of the variance. Study 2 replicates these findings in a sample of monolingual French children. Finally, Study 3 extends this general pattern of findings to a sample of French-English bilingual children. It is argued that early, decontextualized vocabulary supports subsequent language acquisition, which in turn allows children to more readily acquire skills related to emergent literacy and kindergarten readiness.


How to speak two languages for the price of one

Daniel Kleinman Beckman Institute, University of Illinois

+ more

Bilinguals often switch languages spontaneously even though experimental studies consistently reveal robust switch costs (i.e., it takes more time to respond in a language different than the one used on the previous trial). Do bilinguals always make these spontaneous switches despite the costs, or can switching be cost-free under circumstances that lab tasks don’t capture? I will discuss several picture naming experiments (conducted with collaborator Tamar Gollan) in which bilinguals were instructed to switch languages in such a way that they would only switch when the name of the concept they wanted to express was more accessible in the language they were not currently speaking. These instructions, which constrained bilinguals’ language choices, led them to switch between languages without any cost, and even maintain two languages in readiness as easily as a single language. In contrast, when bilinguals were given full freedom to switch between languages at any time, most opted for less efficient strategies that led to switch costs. These results demonstrate that cost-free language switching and language mixing are possible and that language switching efficiency can be increased by reducing choice.


An ERP study of predictability and plausibility in sentence processing

Megan Bardolph UCSD, Cognitive Science

+ more

Because of the underlying structure present in language, many models of language processing suggest that people predict not only general semantic content of discourse, but also specific lexical features of upcoming words in sentences. I will present an ERP study that explores the nature of predictability and plausibility in sentence processing. This fine-grained analysis shows how measures of predictability (including sentence constraint, cloze probability, and LSA) and plausibility affect ERP measures of processing, both the N400 and late positivities.


Gird your loins! A conversation about emotion, embodiment, and swearing

Ben Bergen and Piotr Winkielman UCSD

+ more

Cognitive Science professor and psycholinguist Ben Bergen and Psychology professor and emotion researcher Piotr Winkielman will have a discussion about mind, body, and profanity. Audience participation is encouraged, so please come with questions!


Why (and when) do speakers talk like each other?

Rachel Ostrand Cognitive Science, UCSD

+ more

During interactive dialogue (conversation) as well as in non-interactive speech (e.g., answering questions or speech shadowing), speakers modify aspects of their speech production to match those of their linguistic partners. Although there have been many demonstrations of this "linguistic alignment" for different (para-)linguistic features (e.g., phonology, word choice, gesture, speech rate), different speakers of a language can vary considerably in such features (e.g., I might speak quickly and you speak slowly, even when saying the same content). Thus, truly comprehensive alignment will require some degree of partner-specific alignment. Does partner-specific alignment arise because speakers can keep track of relevant linguistic features independently for different conversational partners? Or is alignment driven by across-the-board (i.e., partner-nonspecific) representations of the distributions of linguistic features? I'll discuss the results of five experiments, which show that when the overall distribution of syntactic constructions is balanced across an experimental session, people do not show partner-specific alignment, even when individual partners produce distinct and systematic syntactic distributions. However, when the overall distribution of syntactic constructions is biased within an experimental session -- across all partners -- speakers do align to that bias. Thus, speakers align to their recent syntactic experience, but only on the basis of overall, rather than partner-specific, statistics. In the syntactic domain (and perhaps in all non-referential domains), any partner-specific alignment that speakers exhibit seems to reflect their overall experience, rather than speakers tracking and then aligning to their partners’ statistically biased behaviors in a partner-specific way.


Mothers’ speech and object naming contingent on infants’ gaze and hand actions

Lucas Chang Cognitive Science, UCSD

+ more

Language input contributes to infants’ learning and predicts their later language outcomes, yet occurs in a dynamic social context. A growing body of research indicates that caregivers’ responsiveness to infants facilitates language acquisition. I will present a longitudinal study of mother-infant interactions that sheds light on how contingent responsiveness makes language accessible to infants. In addition to eliciting a greater volume of maternal speech, infants’ exploratory gaze and hand actions also change the nature of the input to associative learning systems: associations arise not only between self-generated actions and caregiver responses, and between caregiver speech and external referents, but also jointly among all these modalities.


How The Eyes “Read” Sign Language: An Eyetracking Investigation of Children and Adults during Sign Language Processing

Rain Bosworth Psychology, UCSD

+ more

Whether listening to spoken sentences, watching signed sentences, or even reading written sentences, the behaviors that lead to successful language comprehension can be characterized as a developed perceptual skill. Over four prolific decades, Keith Rayner pioneered eyetracking research showing how eye-gaze behavior during reading text and scene perception is affected by perceptual, linguistic, and experiential factors. In comparison, much remains unknown about how signers “read” or “watch” sign language. In this talk, we report progress on recent experiments that were designed to discover correlations amongst measures of gaze behavior, story comprehension, and Age of ASL Acquisition (AoA) in children and adults. Using the 120X Tobii eyetracker, we found that, compared to late and novice signers, early native signers exhibited more focused fixations on the face region and smaller scatter in their gaze space. Remarkably, these mature skilled gaze patterns were already found in our youngest native signers by 3 to 5 years of age. Among adults, smaller vertical gaze space was highly correlated with earlier AoA, better comprehension, and higher lexical recall. This led us to ask whether these focused gaze patterns are merely indicators of high perceptual skills or whether they could also cause better perceptual processing. To test this, we examined a group of novice ASL students who were explicitly instructed to fixate on the face and not move their eyes while watching stories, mimicking the skilled gaze behavior seen in early signers. Eyetracking data showed that their gaze patterns changed according to the instructions, and moreover, that this change resulted in better comprehension accuracy. Current data suggests that age-related changes in passive eye gaze behavior can provide a highly sensitive index of normal sign language processing. 
We hope to use these findings towards promoting perceptual behaviors that support optimal language processing in deaf signing children.


Language Research: Theory and Practice

Stephanie Jed Literature, UCSD

+ more

For Galileo, Kepler, Bacon and others, linguistic competence in Latin and Greek was a foundation for scientific research. Without the ability to read and write in a “foreign” language, it was thought in the 16th and 17th centuries, modern scientists would not be able to articulate the epistemological and methodological grounds of their research and discoveries (Westman). Language-learning – and continued exercise in reading and writing – was, therefore, an integral part of scientific training and an integral dimension of scientific creativity and method. Today, however, the learning of a language is generally divided from research on language and the brain. Courses (in linguistics, cognitive science, neuroscience, psychology etc.) that examine linguistic structures, language acquisition, language development, language processing, language perception, language and memory, language and learning, language and the sensory motor system, etc. generally do not offer any practice of advanced language learning. In this presentation, I will ask what may be lost in this disciplinary division. Outlining the proposal of an upper division course that would integrate language-learning with research in embodiment, the sensory motor system, the mirror neuron hypothesis, and other topics, I invite brainstorming and collaboration from the CRL community in the design of a new integrated course in language research - theory and practice.


Connectionist morphology revisited

Farrell Ackerman and Rob Malouf

+ more

In naturally occurring text, the frequencies of inflected wordforms follow a Zipfian distribution, with a small set of inflected forms occurring frequently and a long tail of forms that are rarely (or never) encountered. For languages with complex inflectional systems (e.g., in the Sino-Tibetan language Khaling, each verb can have up to 331 different forms based on up to ten distinct stems, and there are numerous verb classes), most inflected forms of most words will never be observed. Consequently, speakers will necessarily be faced with what Ackerman et al. (2009) pose as the Paradigm Cell Filling Problem: how do speakers reliably predict unknown inflected forms on the basis of a set of known forms? Recent theoretical approaches to this problem (e.g., Ackerman & Malouf 2013, Bonami & Beniamine 2015, Blevins 2016, Sims 2016, among others) have emphasized the role of implicational relations and analogy, but despite intriguing results concerning information-theoretic principles of paradigm organization, various aspects of learning have proven difficult to formalize. In this talk, we discuss the role that connectionist models of inflection can play in solving the PCFP. Connectionist models of morphological learning inspired a vigorous debate in the 1980s and early 1990s over quite simple morphological phenomena: many theoretical linguists were convinced by Pinker & Prince (1988) and others that connectionist models could not treat morphology as successfully as symbolic analyses in linguistic theory. However, over the past ten or so years, morphological theory has developed beyond familiar morpheme-based perspectives with new word-based models, and modern "deep learning" connectionist models have become capable of identifying new patterns in data and principles concerning complex morphological organization.
We will explore some new directions in morphological analysis, with particular attention to some preliminary results in the connectionist learning of complex morphological paradigms.
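The Zipfian sparsity behind the Paradigm Cell Filling Problem is easy to simulate. The sketch below (not from the talk) uses the Khaling paradigm size mentioned above; the token count and the exact Zipf exponent are invented for illustration. It shows how even a few hundred corpus tokens of a single verb leave most paradigm cells unattested.

```python
import random
from collections import Counter

# Toy illustration of the Paradigm Cell Filling Problem: if token
# frequencies follow a Zipfian distribution, a plausible corpus sample
# of one verb with a 331-cell paradigm (as in Khaling) leaves most
# cells unobserved.
N_CELLS = 331     # paradigm size, from the Khaling example
N_TOKENS = 300    # hypothetical number of corpus tokens for one verb

# Zipfian weights: the cell of rank r gets weight 1/r
weights = [1.0 / r for r in range(1, N_CELLS + 1)]

random.seed(0)
tokens = random.choices(range(N_CELLS), weights=weights, k=N_TOKENS)
counts = Counter(tokens)
attested = set(tokens)

print(f"cells attested: {len(attested)} of {N_CELLS}")
top5 = sum(c for _, c in counts.most_common(5))
print(f"top 5 cells cover {top5 / N_TOKENS:.0%} of tokens")
```

Roughly a third of the cells are attested in a run like this; a speaker must infer the rest from implicational structure, which is the problem the connectionist models discussed here are meant to address.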


Text and discourse validation

Murray Singer

+ more

Beyond the processing of language at the word, sentence, and message levels, there is accumulating evidence that readers engage in the continual validation of message consistency and congruence. I will outline the theoretical framework in which I have investigated this phenomenon. Empirical evidence will be presented pertaining to (a) the basic phenomenon, (b) validation of presupposed versus focused text ideas, and (c) individual differences in validation processing. Validation principles emerging from work in numerous labs will be identified. Strategies for reconciling validation successes and failures will be considered.


Investigating Children’s Testimonial Learning: Sources of Protection and Vulnerability

Melissa Koenig Institute of Child Development, University of Minnesota

+ more

Much of what we know we learn from what others tell us. My research program examines testimonial learning by focusing on children’s reasoning about sources. In this research, we focus on two kinds of estimates children make about speakers: estimates of their knowledge and their responsibility. Using these two types of estimates, I will discuss sources of protection and vulnerability that characterize children’s learning decisions. First, I will suggest that as soon as children can monitor the truth of a message, they show an interest in assessing the grounds or reasons that speakers have for their claims. Second, I’ll argue that while children are ready to flexibly adjust their epistemic inferences in line with a speaker’s behavior, children’s interpersonal assumptions of responsibility may be more culturally variable, and harder to undermine. Findings will be discussed in relation to categories of protection that are shared with adults, as well as implications for the role that interpersonal trust may play in testimonial learning.


Resolving Quantity- and Informativeness-implicature in indefinite reference

Till Poppels Linguistics, UCSD

+ more

A central challenge for all theories of conversational implicature (Grice, 1957, 1975) is characterizing the fundamental tension between Quantity (Q) implicature, in which utterance meaning is refined through exclusion of the meanings of alternative utterances, and Informativeness (I) implicature, in which utterance meaning is refined by strengthening to the prototypical case (Atlas & Levinson, 1981; Levinson, 2000). Here we report a large-scale experimental investigation of Q-I resolution in cases of semantically underspecified indefinite reference. We found strong support for five predictions, strengthening the case for recent rational speaker models of conversational implicature (Frank & Goodman, 2012; Degen, Franke, & Jäger, 2013): interpretational preferences were affected by (i) subjective prior probabilities (Informativeness), (ii) the polarity and (iii) the magnitude of utterance cost differentials (Quantity), (iv) the felicity conditions of indefinite NPs in English, and (v) the ‘relatability’ of X and Y.
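The rational speaker models cited above (Frank & Goodman, 2012) can be illustrated with a minimal Rational Speech Acts chain. In the sketch below, the meanings, utterances, priors, and costs are invented for the demo and are not the materials of this study; the prior plays the role of Informativeness (strengthening toward the prototypical case), while utterance alternatives and their cost differentials play the role of Quantity.

```python
# Minimal Rational Speech Acts (RSA) sketch in the spirit of
# Frank & Goodman (2012). All values below are illustrative assumptions.
import math

meanings = ["prototypical", "atypical"]
utterances = ["short", "long"]  # hypothetical alternatives
# Truth-conditional lexicon: is utterance u true of meaning m?
lexicon = {("short", "prototypical"): 1, ("short", "atypical"): 1,
           ("long", "prototypical"): 0, ("long", "atypical"): 1}
prior = {"prototypical": 0.8, "atypical": 0.2}  # Informativeness: prior bias
cost = {"short": 0.0, "long": 1.0}              # Quantity: longer is costlier
alpha = 1.0                                     # speaker rationality

def L0(u):
    """Literal listener: prior renormalized over meanings where u is true."""
    scores = {m: lexicon[(u, m)] * prior[m] for m in meanings}
    z = sum(scores.values())
    return {m: s / z for m, s in scores.items()}

def S1(m):
    """Pragmatic speaker: softmax of literal informativity minus cost."""
    scores = {u: math.exp(alpha * (math.log(L0(u)[m]) - cost[u]))
              if lexicon[(u, m)] else 0.0 for u in utterances}
    z = sum(scores.values())
    return {u: s / z for u, s in scores.items()}

def L1(u):
    """Pragmatic listener: Bayesian inversion of the speaker model."""
    scores = {m: prior[m] * S1(m)[u] for m in meanings}
    z = sum(scores.values())
    return {m: s / z for m, s in scores.items()}

# The ambiguous "short" utterance is strengthened toward the prototype
# (I-implicature): its posterior rises above the 0.8 prior, while the
# costlier "long" alternative pushes interpretation to the atypical meaning.
print(L1("short"), L1("long"))
```

Varying the cost differential and the prior in a model like this yields the kinds of graded interpretational predictions the study tests.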


Emergence of space-time mappings in communication: Initial biases and cultural evolution

Esther Walker and Tessa Verhoef Cognitive Science, UCSD

+ more

Humans spatialize time. This occurs in artifacts like timelines, in spontaneous gestures, and in conventional language ("think BACK to the summer"). These links between space and time, moreover, exist both as associations in individual minds and as shared, cultural systems that transcend individuals. Understanding the origins of this "tangle of space and time" will require analyses at multiple levels, from initial individual biases, to local cultural norms, to cultural evolution (Núñez and Cooperrider, 2013). Where do these space-time links come from, and how are individual biases related to cultural norms?
Here we present a series of laboratory experiments using methods from the field of Language Evolution to simulate the cultural emergence of space-time mappings. In a first communication game experiment, dyads had to communicate about temporal concepts using only a novel, spatial signaling device. Over the course of their interaction, participants rapidly established semiotic systems that mapped systematically between time and space, reflecting both improvisation and social coordination. These semiotic systems exhibited a number of similarities -- but also striking idiosyncrasies. Ongoing research is investigating how these initial systems will change as they are propagated repeatedly. We predict that cultural transmission across multiple "generations" will produce increasingly regular and stable semiotic systems, systems that entrench and reproduce both shared biases and idiosyncratic "historical accidents." By foregrounding the interaction of mechanisms that operate on disparate timescales, laboratory experiments can shed light on the commonalities and variety found in space-time mappings in languages around the world.


Lateralization of the N170 for word and face processing in deaf signers

Zed Sevcikova Sehyr School of Speech, Language, and Hearing Sciences, SDSU

+ more

Left-lateralization for words develops before right-lateralization for faces, and hemispheric specialization for faces may be contingent upon prior lateralization for words (Dundas, Plaut & Behrmann, 2014). We examined the relationship between word and face processing for deaf native users of American Sign Language who have distinct developmental experiences with both words and faces (e.g., the face conveys linguistic information). We investigated whether hemispheric organization of word and face recognition (indexed by lateralization of the N170) is uniquely shaped by sign language experience. Hearing non-signers and deaf signers made same-different judgments to pairs of words or faces (192 trials each), where the first stimulus was presented centrally and the second was presented to either the left (LH) or right hemisphere (RH). EEG was recorded to the centrally presented stimulus and referenced to the average of all 32 electrode sites. We observed a similar pattern of N170 laterality for deaf and hearing participants, but with a different scalp distribution. For both groups, the N170 to words was larger at LH occipital sites, but only hearing participants also showed a larger N170 at LH temporal sites. For faces, deaf signers showed a larger N170 response at RH temporal sites, with a weaker amplitude difference at occipital sites. Hearing participants showed a similar RH lateralized response over both temporal and occipital sites. Thus, lateralization for words and faces appears similar for deaf and hearing individuals, but differences in scalp distribution may reflect unique organization of visual pathways in the occipitotemporal cortex for deaf signers.


Pushing the boundary of parafoveal processing in reading

Mallorie Leinenger & Liz Schotter Psychology, UCSD

+ more

When we read, we look directly at (i.e., foveate) a word while at the same time obtaining a preview of the word(s) to come, in parafoveal vision. The current theory of reading is that parafoveal processing is used to facilitate subsequent foveal processing. That is, fixation durations on the subsequent foveal target word are shorter when the reader had an accurate (i.e., identical) parafoveal preview of that word than when the preview stimulus had been replaced with something else (i.e., in a gaze-contingent display change paradigm; Rayner, 1975). The presumed mechanism for this facilitated processing is integration of parafoveal preview and foveal target information across saccades, which is easier when the two words are similar. However, we suggest that there are cases in which processing of the parafoveal preview can directly influence fixation behavior on the foveal target, even in the absence of similarity between preview and target. Thus, we hypothesize that, if easy to process, the preview stimulus can be used to pre-initiate future eye movement programs, leading to fairly short fixations on any target stimulus. In this talk, we describe two experiments that find evidence for this alternative hypothesis and we explain how these effects may be accommodated by an existing model of oculomotor control in reading.


Conceptual Integration and Multimodal Discourse Comprehension

Seana Coulson Cognitive Science, UCSD

+ more

In face to face conversation, understanding one another involves integrating information activated by our interlocutors' speech with that activated by their gestures. I will discuss a series of studies from my lab that have explored the cognitive processes underlying speech-gesture integration. These studies indicate the importance of visuo-spatial working memory resources for understanding co-speech iconic gestures.


Word learning amidst phonemic variability

Conor Frye Cognitive Science, UCSD

+ more

There is a major assumption that a language learner’s initial goal is to detect specific sound categories in that language, and that these sound categories and their perceptual boundaries are fairly fixed in adulthood. Theoretical accounts built on this assumption imply that learners should no longer be able to learn phonemically variable words as the same word—for example, that div and tiv are equivalent labels for a novel concept. We provide evidence that categories are much more plastic and can be modified and merged, even in adulthood, and that exposure to different probability distributions alters functional phoneme boundaries nearly immediately. Such malleability challenges the psychological relevance of phonemes for learning and recognizing words, and argues against the primacy of the phoneme in word representations in favor of a more probabilistic definition of word and speech sound identity.


Iconicity, naturalness and systematicity in the emergence of sign language structure

Tessa Verhoef, Carol Padden, and Simon Kirby Center for Research in Language, UCSD

+ more

Systematic preferences have been found for the use of different iconic strategies for naming man-made hand-held tools (Padden et al., 2014) in both sign and gesture: HANDLING (showing how you hold it) and INSTRUMENT (showing what it looks like) forms are most frequently used. Within those two, sign languages vary in their use of one strategy over the other (Padden et al., 2013). Such lexical preferences across different sign languages provide an ideal test case for understanding the emergence of conventions in language in which multiple types of bias are at play. Specifically, we argue that there may be distinct biases operating during production and interpretation of single signs on the one hand, and learning a conventional system of signs on the other. It is crucial we understand how these distinct biases interact if we are to explain the emergence of systematicity in a linguistic system with iconic underpinnings. We present three experiments that together help to form a picture of the interplay between naturalness, iconicity and systematicity in the origin of linguistic signals. The first experiment maps out people's initial natural biases towards the two strategies for naming tools, the second investigates the effects of these biases on the learnability of artificial languages, and the third tests the flexibility of participants’ biases when they are exposed to specific types of data. Our results show that non-signers quickly detect patterns for which they need to categorize abstract iconic gesture strategies, while there is a subtle interplay between learning biases and natural mapping biases. Natural mapping biases seem to strongly influence one-off judgments on individual items, while a bias for systematicity takes effect once there is exposure to sets of structured data.


Measuring Conventionalization in the Manual Modality

Savithry Namboodiripad, Dan Lenzen, Ryan Lepic, and Tessa Verhoef Linguistics, UCSD

+ more

Gestures produced by users of spoken languages differ from signs produced by users of sign languages in that gestures are more typically ad hoc and idiosyncratic, while signs are more typically conventionalized and shared within a language community. To study how gestures may change over time as a result of the process of conventionalization, we designed a social coordination game to elicit repeated silent gestures from hearing nonsigners, and used Microsoft Kinect to unobtrusively track the movement of their bodies as they gestured (following Lenzen, 2015). Our approach follows both a tradition of lab experiments designed to study social coordination and transmission in the emergence of linguistic structure (Schouwstra et al., 2014) and insights from sign language research on language emergence. Working with silent gesture, we were able to simulate and quantify, in the laboratory, effects of conventionalization that have been described for sign languages (Frishberg, 1975), including changes in efficiency of communication and size of articulatory space. With Kinect we were able to measure changes in gesture that are also the hallmarks of conventionalization in sign language. This approach opens the door for more direct future comparisons between ad hoc gestures produced in the lab and natural sign languages in the world.


Pronominal ambiguity resolution in Japanese benefactive constructions

Kentaro Nakatani Linguistics, UCSD/Konan University

+ more

Japanese benefactive constructions ("do something for somebody") usually involve auxiliary uses of verbs of giving. Because Japanese has two contrastive giving verbs, kureru 'give (to the speaker)' and ageru 'give (to a non-speaker)' (which are, roughly, transitive counterparts of 'come' and 'go'), two corresponding types of benefactive constructions can be formed, depending on who the beneficiary is. This feature usually aids the processing of Japanese, a massively pro-drop language, because null arguments can be recovered from the choice of these benefactive verbs. Things can get complicated, however, when the existence of an adjunct clause combined with the use of null pronouns leads the comprehender to a specific resolution of the referential ambiguity of these null pronouns, and it eventually turns out that this resolution contradicts the interpretive requirements of the benefactive verbs (i.e., who the beneficiary should be).
While previous studies have pointed out the processing load (supposedly) triggered by a structural reanalysis in such benefactive constructions, what has been overlooked is the effect of pragmatic inferences made between the embedded adjunct clause and the main clause. In this study, I will show that these inter-eventive pragmatic inferences affect ease of comprehension in opposite directions depending on the choice of benefactive verb, reporting results from two self-paced reading experiments and a forced-choice query.


Repetition and information flow in music and language

Davy Temperley Music Theory, Eastman School of Music

+ more

In the first part of this talk I will report on some recent research on the use of repetition in language and music. A corpus analysis of classical melodies shows that, when a melodic pattern is repeated with an alteration, the alteration tends to lower the probability of the pattern - for example, by introducing larger intervals or chromatic notes (notes outside the scale). A corpus analysis of written English text shows a similar pattern: in coordinate noun-phrase constructions in which the first and second phrases match syntactically (e.g. "the black dog and the white cat"), the second phrase tends to have lower lexical (trigram) probabilities than the first. A further pattern is also observed in coordinate constructions in language: the tendency towards "parallelism" (syntactic matching between the first and second coordinate phrases) is much stronger for rare constructions than for common ones (the "inverse frequency effect"). (There is some evidence for this phenomenon in music as well.) I will suggest that these phenomena can be explained by Levy and Jaeger's theory of Uniform Information Density (UID): repetition is used to smooth out the "spikes" in information created by rare events.
In the second part of the talk I will focus further on the inverse frequency effect, and suggest another factor that may be behind it besides UID. I will argue that it may facilitate sentence processing, by constraining the use of rare syntactic constructions to certain situations - essentially, situations in which they are repeated. This helps to contain the combinatorial explosion of possible analyses that must be considered in sentence processing. I will relate this to another type of rare syntactic construction, "main clause phenomena" - constructions that occur only (or predominantly) at the beginning of a main clause, such as participle preposing and NP topicalization. This, too, can be explained in processing terms: since processing the beginning of a sentence requires little combinatorial search, it is natural that a greater variety of constructions would be allowed there.


New space-time metaphors foster new mental representations for time

Rose Hendricks Cognitive Science, UCSD

+ more

Do linguistic metaphors give rise to non-linguistic representations? If so, then learning a new way of talking about time should foster new ways of thinking about it. We describe a set of studies in which we trained English-speaking participants to talk about time using vertical spatial metaphors that are novel to English. One group learned a mapping that placed earlier events above, and the other a mapping that placed earlier events below. After mastering the new metaphors, participants were tested in a non-linguistic implicit space-time association task – the Orly task. This task has been used previously to document cross-linguistic differences in representations of time (Boroditsky et al., 2010; Fuhrman et al., 2011). Some participants completed temporal judgments in the Orly task without any other secondary task, while others did so under either verbal or visual interference. Overall, the system of metaphors that participants were trained on influenced their performance on the Orly task, and this influence did not differ among the three interference conditions, although the effect did not reach significance for participants in the verbal interference condition. This suggests that as a result of learning a new metaphor, people developed new implicit metaphor-consistent ways of thinking about time. Finally, a serendipitous sample of Chinese-English bilinguals, who are already familiar with vertical metaphors for time, provided us with the opportunity to investigate what happens when natural language metaphors and newly acquired ones conflict. These participants demonstrated a combination effect, in which both long-term and immediate experience shaped their thinking. I'll share the work that has been done on this project and the directions we hope to pursue going forward.


Feshing fur phonims: Learning words amidst phonemic variability

Conor Frye Cognitive Science, UCSD

+ more

There is a widespread assumption that a language learner’s initial goal is to detect the specific sound categories of that language, and that these sound categories and their perceptual boundaries are fairly fixed by adulthood. The studies and theoretical accounts behind this assumption imply that adult learners should no longer be able to learn phonemically-differing words as the same word—for example, that paff and baff are equivalent labels for a novel concept. We provide evidence that categories are much more plastic and can be modified and merged, even in adulthood, and that exposure to different probability distributions alters functional phoneme boundaries. Such malleability challenges the psychological relevance of phonemes for learning and recognizing words, and argues against the primacy of the phoneme in word representations in favor of a more probabilistic definition of word and speech sound identity.


Studying plasticity for speech perception in the brain: False starts and new trails

Jason Zevin Psychology and Linguistics, USC

+ more

People typically and spectacularly fail to master the speech sound categories of a second language (L2) in adulthood. In searching for the neural basis of this phenomenon, we have begun to suspect that the neural indices of difficulties in adult L2 speech perception reflect the behavioral relevance of the stimuli, rather than any basic perceptual function relevant to stimulus categorization. I will present evidence for this interpretation, followed by some proposals for what to do about it. One strategy is to focus on how people succeed in understanding L2 speech rather than their failure to categorize speech sounds in ostensibly neutral experimental contexts. We can look, for example, at correlations in brain activity while people listen to discourses of varying lengths. Or we can look at the dynamics of word recognition in simulated communicative contexts. I will be presenting some data from our first steps in these directions.


What You See Isn't Always What You Get

Rachel Ostrand Cognitive Science, UCSD

+ more

Human speech perception often includes both auditory (the speaker's voice) and visual (the speaker's mouth movements) components. Although these two sensory signals necessarily enter the brain separately through different perceptual channels, they end up being integrated into a single perception of speech. An extreme example of this integration is the McGurk Effect, in which the auditory and visual signals conflict and the listener perceives a fusion of the two differing components. My research addresses when this auditory-visual integration occurs: before or after lexical access. Namely, does the visual information that is integrated into the (more reliable) auditory signal have any influence over which word gets activated in the lexicon, or does it merely contribute to a clearer perceptual experience? Which signal is used to access the lexicon to identify the word a listener just perceived - the integrated auditory-visual percept, or the raw auditory signal? If it's the former, then the visual information of a speaker's mouth movements fundamentally influences how you perceive speech. If it's the latter, then when you fall prey to the McGurk Effect (or are in a noisy bar), although you perceive one word, you lexically access another. Or maybe it's both?!


Two methodological principles of phonological analysis

Eric Bakovic Linguistics, UCSD

+ more

The phonological forms of morphemes often alternate systematically, depending on context. The methodological starting point of (generative) phonological analysis is to posit unique underlying mental representations for alternating morphemes, consisting of the same basic units of analysis as their systematically alternating surface representations, and to derive those systematically alternating surface representations using context-sensitive transformations.
Two further methodological principles, the Distribution Principle and the Reckonability Principle, come into play in deciding what the correct underlying representation of a morpheme is. In this talk I define these two principles and describe how they are used in phonological analysis. I focus in particular on a fundamental difference between the two principles: the Distribution Principle follows as a necessary consequence of the methodological starting point identified above, whereas the Reckonability Principle satisfies criteria of formal simplicity and makes an independent contribution only when the Distribution Principle is not applicable.
It is rarely if ever the case that these two methodological principles come into conflict in an analysis of actual phonological data, but the difference between them entails that the Distribution Principle will trump the Reckonability Principle if they ever were to conflict. I present analyses of a prototypical case in two theoretical models, one (Harmonic Grammar) predicting that the conflict is instantiable and the other (Optimality Theory) predicting that it is not, and discuss the potential significance of the apparent fact that the conflict is not (robustly) instantiated in actual phonologies.


Give me a quick hug! Event representations are modulated by choice of grammatical construction

Eva Wittenberg Center for Research in Language, UCSD

+ more

When you talk about a particular event, grammar gives you lots of options. You can use different verb forms, active or passive, topicalizations, or other grammatical devices to highlight, modulate, include or exclude very subtle aspects of the event description. I will be presenting a special case of grammatical choice: light verb constructions, like "Charles gave Julius a hug", their base verb construction counterparts, like "Charles hugged Julius", and non-light, syntactically similar constructions, like "Charles gave Julius a book". With data from several experiments, I will show that light verb constructions are not only processed differently from other constructions, but that they also evoke very particular event representations, modulating not only the processing of thematic roles, but also imagined event durations.


Poverty, dialect, and the “Achievement Gap”

Mark Seidenberg University of Wisconsin, Madison

+ more

Research in cognitive and developmental psychology and in cognitive neuroscience has made enormous progress toward understanding skilled reading, the acquisition of reading skill, the brain bases of reading, and the causes and treatment of reading impairments. The focus of my talk (and a forthcoming book) is this question: if the science is so advanced, why do so many people read so poorly? Everyone knows that when it comes to reading, the US is a chronic underachiever. Literacy levels in the US are low compared to other countries with fewer economic resources. About 30% of the US population has only basic reading skills, and the percentages are higher among lower income and minority groups. I’ll examine arguments by Diane Ravitch and others that attribute poor reading achievement in the US to poverty, and present recent behavioral and modeling evidence concerning the role of language variation—dialect—in the black-white achievement gap in reading. I will suggest that there are opportunities to increase literacy levels by making better use of what we have learned about reading and language but also institutional obstacles and understudied issues for which more evidence is badly needed.


Does verbal description enhance memory for the taste of wine?

Rachel Bristol & Seana Coulson Cognitive Science, UCSD

+ more

We will ask participants to sample wine and either describe their perceptual experience or perform a control task. Memory for these experiences will be informally tested to examine the impact (if any) of verbal description. Besides wine, a variety of tasty snacks and non-alcoholic beverages will be available for consumption. Attendees are encouraged to engage in social interaction so as to promote a naturalistic environment for participants. This event will begin at 3:30 and last until 5pm and attendees are welcome to arrive late or leave early.


Fail fast or succeed slowly: Good-enough processing can mask interference effects

Bruno Nicenboim Potsdam University

+ more

In memory research, similarity-based interference refers to the impaired ability to remember an item when it is similar to other items stored in memory (Anderson & Neely, 1996). Interference has also been shown to be relevant to language comprehension processes. On a cue-based retrieval account (Van Dyke & Lewis, 2003; Lewis & Vasishth, 2005), grammatical heads such as verbs provide retrieval cues that are used to distinguish between the target item and competitors in memory. Similarity-based interference occurs when items share cues (such as number, syntactic category, etc.), which makes it harder to distinguish between them, causing both longer reading times (RTs) and lower question-response accuracy. Since lower accuracy could result from either incorrectly retrieving a competitor or simply failing to complete a retrieval (an unstarted or aborted process), it is unclear how RTs are related to question-response accuracy. We conducted a self-paced reading experiment that investigated interference effects in subject-verb dependencies in German. We found the expected retrieval interference effect: longer RTs as well as lower accuracy in high-interference conditions vs. low-interference ones. In addition, we fitted hierarchical multinomial processing trees (MPT; Riefer & Batchelder, 1988; Matzke et al., 2013) in the Stan modeling language to estimate the latent parameters underlying comprehension accuracy: the probability of any retrieval, the probability of a correct retrieval, and the bias to guess "yes" (in comparison to "no"). We show that the estimates of the underlying parameters can uncover a complex relationship between accuracy and RTs: high interference causes longer RTs at successful retrievals, but it also causes a higher proportion of incomplete retrievals that in turn lead to lower accuracy and shorter RTs.
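A toy sketch of the category probabilities such a processing tree defines (the parameterization here is a simplified illustration, not the fitted hierarchical model from the study):

```python
def mpt_response_probs(r, c, g):
    """
    Hypothetical multinomial processing tree for a yes/no comprehension
    question whose correct answer is "yes":
      r : probability that a retrieval completes at all
      c : probability that a completed retrieval recovers the correct item
      g : bias to guess "yes" when no retrieval completes
    Returns (P(answer "yes"), P(answer "no")).
    """
    p_yes = r * c + (1 - r) * g               # correct retrieval, or a lucky guess
    p_no = r * (1 - c) + (1 - r) * (1 - g)    # competitor retrieved, or an unlucky guess
    return p_yes, p_no
```

Under such a tree, lowering the completion probability r (as high interference is argued to do) reduces accuracy even when completed retrievals are mostly correct, which is how lower accuracy can co-occur with shorter RTs on incomplete trials.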


The grammar of emotions: Word order, particles, and emphasis

Andreas Trotzke Linguistics, University of Konstanz

+ more

In this talk, I provide evidence for a pragmatic notion of emphasis that is closely related to mirativity, a kind of evidentiality marking by which an utterance is marked as conveying information that is unexpected or surprising to the speaker. Certain options of German word order, sometimes in combination with discourse particles, yield an emphatic character that is typical for the expressive side of utterances and endows them with an exclamative flavor. Cross-linguistic evidence offers good reasons to assume that (at least certain forms of) emphatic marking must be distinguished from information structure. I introduce a new phenomenon in this context, namely cases of co-constituency of discourse particles and wh-elements in the left periphery of the clause. I argue that this construction shows several features of emphasis, and I substantiate my claim by a phonetic experiment that investigates whether the construction shows some of the core characteristics of emotive speech.


Cross-cultural diversity in narrative structure: Towards a linguistic typological approach to visual narrative

Neil Cohn Department of Cognitive Science, UCSD

+ more

While extensive research has studied the structure of language and verbal discourse, only recently has cognitive science turned towards investigating the structure of visual narratives like those found in comics. This work on the “narrative grammar” of sequential images has identified several structural patterns in visual narratives. To examine the extent of these patterns in actual narrative systems, we examined a corpus of roughly 160 comics from across the world (American comics, Japanese manga, Korean manhwa, OEL manga, French bande dessinée, and German comics) constituting approximately 18,000 panels. Our analysis will show that visual narratives differ between cultures in systematic ways across several dimensions, including linear semantic coherence relations between images, the attentional framing of scenes, and the narrative constructions used in sequential images. However, these patterns are not restricted to geographic boundaries, but rather to the narrative systems used across authors of a common “style.” That is, these findings will suggest that different systematic narrative grammars characterize the “visual languages” used in comics across the world, and that common typological principles may underlie the structure of narrative systems cross-culturally.


Pragmatic strategies for efficient communication

Leon Bergen Brain and Cognitive Sciences, MIT

+ more

Pragmatic reasoning allows people to adapt their language to better fit their communicative goals. Consider scalar implicatures, e.g. the inference that "Some of the students passed the test" means that not all of them passed. Without this pragmatic strengthening, the only way that a speaker could communicate this meaning is by using the longer and clumsier phrase, "Some but not all." The speaker in this example can be confident that the listener will draw the correct inference, because they share a simple maxim of conversation: be informative. If the speaker had known that all of the students had passed, then saying "All" would have been more informative than saying "Some"; the listener can therefore conclude that not all of the students passed. This type of Gricean reasoning has recently been formalized in models of recursive social reasoning (Franke, 2009; Frank and Goodman, 2012; Jager, 2012), and used to predict quantitative judgments in pragmatic reasoning tasks.

I will discuss recent work on pragmatic inferences which require more than just the assumption of speaker informativeness. This includes a diverse set of phenomena, several of which have not previously been thought to be pragmatic in nature: exaggeration and metaphor, focus effects from prosodic stress, quantifier scope inversion, and embedded implicatures. Drawing on experimental evidence and computational modeling, I will argue that each of these phenomena corresponds to a natural way of augmenting pragmatic reasoning with additional knowledge about the world or the structure of social intentions. These phenomena illustrate both the sophistication of people's pragmatic reasoning, and how people leverage this reasoning to improve the efficiency of their language use.
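The recursive reasoning sketched above can be made concrete in a minimal Rational Speech Acts model in the style of Frank and Goodman (2012). The world space, softmax temperature, and uniform prior below are illustrative assumptions, not the specific models discussed in the talk:

```python
worlds = [0, 1, 2, 3]                  # how many students (out of 3) passed
utterances = ["none", "some", "all"]

def literal(u, w):
    """Truth-conditional semantics: is utterance u true of world w?"""
    return {"none": w == 0, "some": w >= 1, "all": w == 3}[u]

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(u):
    """Literal listener: uniform prior restricted to worlds where u is true."""
    return normalize({w: float(literal(u, w)) for w in worlds})

def S1(w, alpha=4.0):
    """Speaker: softmax-rational choice among the true utterances for world w."""
    return normalize({u: (L0(u)[w] ** alpha if literal(u, w) else 0.0)
                      for u in utterances})

def L1(u):
    """Pragmatic listener: Bayesian inversion of the speaker model."""
    return normalize({w: S1(w)[u] for w in worlds})
```

Here `L1("some")` places very little probability on the all-passed world, deriving the "some but not all" implicature purely from the assumption that the speaker is informative.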


How cultural evolution gives us linguistic structure

Simon Kirby Language Evolution and Computation, University of Edinburgh

+ more

Evolutionary linguists attempt to explain the origins of the fundamental design features of human language, such as duality of patterning, compositionality or recursion. I will argue that these system-wide properties of language are the result of cultural evolution. We can recreate this process of cultural evolution in the lab and observe closely how structure emerges from randomness as miniature languages are passed down through chains of participants by iterated learning.

I will present two such experiments, one in the gestural modality showing the emergence of conventionalised sign from iconic pantomime, and one using an artificial language learning and interaction task. These experiments show that, contrary to initial expectations, the emergence of structure is not inevitable, but relies on a trade-off between pressures from learning and pressures from communication. I will end the talk by arguing that these results provide a unifying explanation for why complexity in languages appears to correlate inversely with number of speakers, and why Al-Sayyid Bedouin Sign Language appears to lack duality of patterning.


Form, meaning, structure, iconicity

Bart de Boer Artificial Intelligence Lab, Vrije Universiteit Brussel

+ more

This talk explores the relation between structure and iconicity with a combination of computer models and experiments. Iconic structure is a systematic mapping between form and meaning. This may influence how easily signals are learned and understood, and it has been hypothesized that it may have played an important role in early language evolution. However, modern languages make relatively little use of it, and it is a mystery how (evolutionarily) early language made the transition from iconic to conventionalized, structured systems of signals. I will first present a brief introduction to what it means for signals to be iconic and what problems iconic signals pose for a theory of language evolution. I will then present a model of how a transition from iconic to structured signals could take place, as well as preliminary experimental results on whether the model fits human behavior.


The evolutionary origins of human communication and language

Thomas Scott-Phillips Evolutionary and Cognitive Anthropology, Durham University, UK

+ more

Linguistic communication is arguably humanity's most distinctive characteristic. Why are we the only species that communicates in this way? In this talk, based upon my recent book (Speaking Our Minds, Palgrave Macmillan), I will argue that the difference between human communication and the communication systems of all other species is likely not one of degree, but rather one of kind. Linguistic communication is made possible by mechanisms of metapsychology, and made expressively powerful by mechanisms of association. In contrast, non-human primate communication is most likely the opposite: made possible by mechanisms of association, and made expressively powerful by mechanisms of metapsychology. This conclusion suggests that human communication, and hence linguistic communication, evolved as a by-product of increased social intelligence. As such, human communication may be best seen, from an evolutionary perspective, as a particularly sophisticated form of social cognition: mutually-assisted mindreading and mental manipulation. More generally, I will highlight the often-neglected importance of pragmatics for the study of language origins.


On the Evolution of Combinatorial Phonological Structure within the Word: Sign Language Evidence

David Perlmutter Linguistics, UCSD

+ more

Human languages, spoken and signed, have combinatorial systems that combine meaningless smaller units to form words or signs. In spoken languages the smaller units are the sounds of speech (phonemes). In sign languages they are handshapes, movements, and the places on the body where signs are made. These constitute phonological structure. Because it builds structure by combining smaller units, phonological structure in both spoken and signed languages is combinatorial. This paper addresses the evolution of combinatorial phonological structure.

Phonological combinatoriality evolved in spoken languages too long ago to be traced. In sign languages that evolution is much more recent and therefore more amenable to study. We argue that signs with combinatorial phonological structure evolved from holistic gestures that lack such structure, tracing the steps in that evolution. We therefore highlight contrasts between signs, products of that evolution, and holistic gestures, from which they evolved.

Combinatoriality gives signs smaller parts whose properties (phonological features) determine how the signs are pronounced. These features surprisingly predict that although signs may resemble the iconic gestures from which they evolved, signs can have anti-iconic pronunciations. Data from American Sign Language (ASL) confirm this prediction.

Since signs’ pronunciation is determined by phonological features of their smaller parts, in a new sign language that has not yet evolved combinatorial phonological structure, there will be no features to constrain signs’ pronunciation. This predicts that in such a language, pronunciation can vary considerably from one signer to another. This prediction is confirmed by data from Al-Sayyid Bedouin Sign Language (ABSL), a newly emerging sign language.

In addition, we briefly present evidence that chimpanzees exposed to ASL for years learned only a small number of holistic gestures, not the combinatorial sign system learned by signers of ASL. This is explained if humans’ combinatorial abilities that are needed to learn the vocabulary of a human language evolved after the human and chimpanzee lineages diverged.


Using text to build predictive models of opinions, networks, and social media

Julian McAuley Computer Science and Engineering, UCSD

+ more

Text is an incredibly rich source of data to build reliable models of human behavior and opinions. Consider tasks such as predicting ratings on Netflix, estimating which pair of jeans is better on Amazon, or predicting which content will "go viral" on Reddit. While such problems have traditionally been approached without considering textual data, in this talk we'll show how models that incorporate text can not only produce more accurate predictions, but can also augment those predictions with interpretable explanations. To achieve this we propose a framework to learn joint embeddings of structured data (e.g. ratings) and text, such that the variation in the former can be explained by (and predicted from) the latter.
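As a rough illustration of predicting structured data from text, here is a minimal sketch that learns one interpretable scalar weight per word so that a review's words predict its rating. This is a deliberately simplified stand-in, not McAuley's actual joint-embedding framework; the data and hyperparameters are invented:

```python
from collections import defaultdict

def train_rating_model(reviews, epochs=50, lr=0.05):
    """
    Toy model: learn a scalar weight per word plus a global offset so that
    the mean weight of a review's words predicts its rating (SGD on squared
    error). The weights double as interpretable 'explanations': strongly
    positive or negative words.
    """
    w = defaultdict(float)
    bias = sum(r for _, r in reviews) / len(reviews)
    for _ in range(epochs):
        for text, rating in reviews:
            toks = text.lower().split()
            pred = bias + sum(w[t] for t in toks) / len(toks)
            err = rating - pred
            for t in toks:
                w[t] += lr * err / len(toks)
    return w, bias

def predict(w, bias, text):
    toks = text.lower().split()
    return bias + sum(w[t] for t in toks) / len(toks)
```

The point of the sketch is the shape of the idea: text features explain variation in the structured signal, and the learned per-word parameters are readable as explanations of a prediction.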


The Organization and Structure of Concepts in Semantic Memory

Ken McRae Department of Psychology, University of Western Ontario

+ more

People use concepts and word meaning every day to recognize entities and objects in their environment, to anticipate how entities will behave and interact with each other, to know how objects should be used, to generate expectancies for situations, and to understand language. Over the years, a number of theories have been presented regarding how concepts are organized and structured in semantic memory. For example, various theories stress that concepts (or lexical items) are linked by undifferentiated associations. Other theories stress hierarchical categorical (taxonomic) structure, whereas still others focus on similarity among concepts. In this talk, I will present evidence that people’s knowledge of real-world situations is an important factor underlying the organization and structure of concepts in semantic memory. I will present experiments spanning word, picture, and discourse processing. Evidence for the importance of situation-based knowledge will cover a number of types of concepts, including verbs, nouns denoting living and nonliving things, other types of relatively concrete noun concepts, and abstract concepts. I will conclude that semantic memory is structured in our mind so that the computation and use of knowledge of real-world situations is both rapid and fundamental.


Interaction's role in emerging communication systems and their conventionalization: Repair as a means for the fixation of form-meaning matches

Ashley Micklos UCLA

+ more

Interaction is an inherent aspect of human language use, allowing us to build communication through varied resources, negotiate meanings, and pass down practices of the community. The research presented here addresses the nature and role of interactional discourse features, namely repair, eye gaze, and turn-taking, in an experimental language evolution setting in which dyads must disambiguate minimally contrastive noun and verb targets using only silent gesture. Here, using a conversation analytic approach, we see how an emerging silent gesture system is negotiated, changed, and conventionalized in dyadic interactions, and how these processes are changed and transmitted over simulated generations. For example, the strategies for and frequency of repair may be indicative of the stage of evolution/conventionalization of a given language system. Furthermore, particular repair strategies may even promote the fixation of certain gestural forms for marking either noun-ness or verb-ness. The data also suggest a cultural preference for certain discourse strategies, which are culturally transmitted along with the linguistic system.


The unrealized promise of cross-situational word-referent learning

Linda Smith Department of Psychology & Brain Science, Indiana University

+ more

Recent theory and experiments offer a new solution to how infant learners may break into word learning: by using cross-situational statistics to find the underlying word-referent mappings. Computational models demonstrate the in-principle plausibility of this statistical learning solution, and experimental evidence shows that infants can aggregate word-referent co-occurrence data and make statistically appropriate decisions from it. This talk considers arguments and evidence against cross-situational learning as a fundamental mechanism, and the gaps in current knowledge that prevent a confident conclusion about whether cross-situational learning is the mechanism through which infants break into word learning. I will present very new evidence (and theoretical ideas) suggesting that we need to ask different empirical questions.
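The co-occurrence-counting core of cross-situational learning can be sketched in a few lines (a toy associative learner over invented trials, not any specific model from this literature):

```python
from collections import defaultdict

def cross_situational(trials):
    """
    Toy cross-situational learner: each trial pairs a set of heard words
    with a set of visible referents, and no single trial disambiguates.
    Accumulate word-referent co-occurrence counts across trials, then map
    each word to its most frequently co-occurring referent.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for words, referents in trials:
        for w in words:
            for r in referents:
                counts[w][r] += 1
    return {w: max(rs, key=rs.get) for w, rs in counts.items()}
```

Across ambiguous trials, only the correct pairing co-occurs consistently, so the aggregated counts single it out even though each individual trial is uninformative.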


Gesture Tracking for the Investigation of Syntactic Ambiguity and Cognitive Processing

Hunter Hatfield Department of Linguistics, University of Otago

+ more

Innovations in methodology can be as important to scientific progress as innovations in theory. The Otago PsyAn Lab (OPAL) experimental platform is an open-source set of tools allowing a researcher to design and conduct experiments in a native AndroidTM environment using touchscreen devices. In experiment one, syntactic processing of well-studied phenomena is investigated. In a Self-Guided Reading task, a novel method introduced in this research, participants read sentences by underlining masked text using a finger. The location of the finger was tracked character-by-character. Growth curve analysis revealed significant differences between curves for all sets of stimuli. Moreover, the location of the change in behaviour was at the predicted location in the sentence, which is not consistently revealed by other methodologies. In experiment two, object and subject relative clauses were investigated. Intriguingly, the point at which the sentence types diverged was earlier than documented using Self-Paced Reading, and more in support of Surprisal theories of processing than Locality theories. This research is placed in a broader context of the merits and drawbacks of the touchscreen methods and plans for work beyond just syntactic ambiguity.


Hemispheric Differences in Activating Event Knowledge During Language Comprehension

Ross Metusalem Department of Cognitive Science, UCSD

+ more

Discourse comprehension often entails inferring unstated elements of described scenarios or events through activation of relevant knowledge in long-term memory. Metusalem et al. (2012) examined the degree to which unstated event knowledge elements are activated during incremental comprehension, specifically at points in a sentence at which those elements would constitute semantic anomalies. Using the event-related brain potential (ERP) method, they found that words that violate the local semantic context but align with the described event elicit a reduced N400 compared to equally anomalous words that do not align with the event. This N400 pattern was argued to indicate that real-time activation of event knowledge occurs with at least partial independence from the immediate sentential context.

The present study addresses contributions of the two cerebral hemispheres to the effect observed by Metusalem et al. While the left hemisphere (LH) has been argued to support expectations for upcoming words through semantic feature pre-activation, the right hemisphere (RH) has been shown to activate concepts beyond those that would constitute expected continuations of the sentence in support of discourse-pragmatic processes. It was therefore hypothesized that RH activity may be driving much, if not all, of the difference in N400 amplitude between event-related and event-unrelated anomalous words in Metusalem et al.’s data.

In the present experiment, Metusalem et al.’s stimuli were used, only now with target words presented to the right or left visual field (RVF/LVF) only. This visual half-field presentation provides a processing advantage to the hemisphere contralateral to the visual field of presentation, accentuating processing by that contralateral hemisphere in the scalp-recorded ERP waveforms. The results show that reduction in N400 amplitude for event-related vs event-unrelated anomalies is found only with LVF/RH presentation. This result is discussed with respect to theories of hemispheric specialization in language processing.


Names, Adjectives, & Gender: The Social Evolution of Linguistic Systems

Melody Dye Department of Cognitive Science, Indiana University

+ more

According to a common metaphor, language is a vehicle for encoding our thoughts and decoding those of others, or of ‘packing’ and ‘unpacking’ the stuff of thought into linguistic form. While this can be a useful methodological framing, it has run up against a number of serious empirical and epistemological challenges. In this talk, I will discuss how information theory can offer a reformulation of the traditional ‘code model’ of communication. On this view, meaning does not reside in words or sentences, but in the exchange – and progressive alignment – of speakers with more (or less) similar codes. Such a perspective emphasizes the importance of uncertainty, prediction, and learning in communication, casting human languages as systems of social exchange that have evolved both to optimize the flow of information between speakers, and to balance the twin demands of comprehension and production. In support of this framing, I will report on a pair of cross-linguistic projects: one, contrasting the evolution of naming systems in the East and in the West, and the other, comparing the functional role of grammatical gender with that of prenominal adjectives across two Germanic languages. This work suggests a principled way of beginning to piece apart those evolutionary pressures on language that are universal from those that are bound to specific social environments.


In constrained contexts, preschoolers’ recognition of accented words is excellent

Sarah Creel Department of Cognitive Science, UCSD

+ more

Do unfamiliar accents impair young children’s language comprehension? Infants detect familiarized word-forms heard in accented speech by 13 months, yet 4-year-olds have difficulty repeating isolated words in unfamiliar accents. The current work attempts to integrate these disparate findings by testing accented word recognition with or without semantic constraint, visual-contextual constraint, and rapid perceptual accent adaptation.

Monolingual English-learning preschoolers (n=32) completed an eye-tracked word recognition test. On each trial, four pictures appeared; 500 milliseconds later, a sentence—sensical or nonsensical, American-accented or Spanish-accented—was spoken. Children attempted to select mentioned pictures as eye movements were tracked. Word-recognition accuracy and visual fixations were higher for sensical than nonsensical sentences. However, accuracy did not differ between accents, and fixations differed only marginally. Thus, preschool-aged children adeptly recognized accented words with semantic and visual-contextual constraint. A second experiment showed lower recognition of Spanish-accented than American-accented words when words were excised from sentences. Throughout, children showed no tendency toward mutual exclusivity responses (selecting a novel object when hearing an accented word), unlike earlier studies of familiar-accented mispronunciations (Creel, 2012). Ongoing work assesses children's accuracy in repeating words (no visual-contextual constraints). Overall, results suggest that decontextualized accented speech is likely to be more difficult for young children to process than is contextually-constrained speech.


Context in pragmatic inference

Judith Degen Department of Psychology, Stanford University

+ more

In the face of underspecified utterances, listeners routinely and without much apparent effort make the right kinds of pragmatic inferences about a speaker’s intended meaning. I will present a series of studies investigating the processing of one type of inference -- scalar implicature -- as a way of addressing how listeners perform this remarkable feat. In particular, I will explore the role of context in the processing of scalar implicatures from “some” to “not all”. Contrary to the widely held assumption that scalar implicatures are highly regularized, frequent, and relatively context-independent, I will argue that they are in fact relatively infrequent and highly context-dependent; both the robustness and the speed with which scalar implicatures from “some” to “not all” are computed are modulated by the probabilistic support that the implicature receives from multiple contextual cues. I will present evidence that scalar implicatures are especially sensitive to the naturalness or expectedness of both scalar and non-scalar alternative utterances the speaker could have produced, but didn’t. In this context I will present a novel contextualist account of scalar implicature processing that has roots in both constraint-based and information-theoretic accounts of language processing and that provides a unified explanation for a) the varying robustness of scalar implicatures across different contexts, b) the varying speed of scalar implicatures across different contexts, and c) the speed and efficiency of communication.


Social robots: things or agents?

Morana Alac Department of Communication

+ more

In our digital, post-analog times, questions of where the world stops and the screen starts, and how to discern the boundary between agency and things, are common. Social robots are one source of this conundrum. For their designers, social robots are fascinating as they combine aspects of machines with those of living creatures: they offer the opportunity to ask how matter can be orchestrated to generate impressions of life and sociality. Social science literature on social robots, on the other hand, has mostly engaged the social/agential (and cultural) character of these technologies, leaving the material aspects to their designers. This talk proposes a social science account that is sensitive to both – the objecthood and agency of social robots. It does so by focusing on actual engagements between robots and those who encounter them as a part of everyday practices in social robotics. I pay specific attention to spatial arrangements, body orientation, gaze, use of gesture and tactile exploration in those interactions. In other words, I ask how the boundary between agency and things is practically resolved through a multimodal and multisensory coordinated engagement in the world as we live it.


Conceptual elaboration facilitates retrieval in sentence processing

Melissa Troyer Department of Cognitive Science, UCSD

+ more

Sentence comprehension involves connecting current linguistic input with existing knowledge about the world. We propose that this process is facilitated (a) when more information is known about referents in the sentence and (b) when comprehenders have greater world knowledge. In single sentences, items with more features can exhibit facilitated retrieval (Hofmeister, 2011). Here, we investigate retrieval when such information is presented over a discourse, rather than within a single sentence. Participants read texts introducing two referents (e.g., two senators), one of which was described in greater detail than the other (e.g., ‘The Democrat had voted for one of the senators, and the Republican had voted for the other, a man from Ohio who was running for president’). The final sentence (e.g., ‘The senator who the {Republican / Democrat} had voted for…’) contained a relative clause picking out either the many-cue referent (with ‘Republican’) or the one-cue referent (with ‘Democrat’). We predicted facilitated retrieval for the many-cue condition at the verb region (‘had voted for’), where ‘the senator’ must be understood as the object of the verb. Participants also completed the Author and Magazine Recognition Tests (ART/MRT; Stanovich & West, 1989), a measure of print experience and a proxy for world knowledge. Since high scorers may have greater experience accessing knowledge in semantic memory, we predicted that they might drive retrieval effects. Indeed, across two experiments, high scorers on the ART/MRT exhibited the predicted effect. Results are consistent with a framework in which conceptual and not just linguistic information directly impacts word retrieval and thereby sentence processing. At least in individuals with greater print exposure, perhaps indicative of greater knowledge, elaboration of conceptual information encoded throughout a discourse seems to facilitate sentence processing.


Elicitation of early negativity (EN) in sentence-processing contexts depends on attentional efficiency

Chris Barkley Department of Linguistics, UCSD

+ more

This study investigates the language-attention interface using early negativity (EN), elicited between 100-300 msec in sentence processing contexts (and commonly referred to as the “eLAN”), as the dependent measure. EN was first elicited in response to “word-category violations” (WCVs) of the type The man admired {a sketch of / *Don’s of sketch} the landscape (Neville et al., 1991). These responses were initially interpreted as an index of first-pass structure-building operations (Friederici, 2002) but later reinterpreted as an index of low-level sensory form-based processing (Dikker, 2009). We hypothesized instead that EN is ontologically an attentional response, and therefore that the physical parameters of the EN should co-vary with measures of attentional efficiency. Under this view, the executive attention system is engaged as subjects monitor for ungrammaticality, orienting them to unexpected, task-relevant stimuli, resulting in selective attention to the WCV, enhanced sensory processing, and increases in the amplitude of the domain-general N100 response.

Here I report preliminary results from a sentence processing experiment including sentences with WCVs, filler sentences containing violations intended to elicit standard LAN, N400, and P600 effects, and an attention task designed to assess the efficiency of an individual’s alerting, orienting, and executive attention networks. Results of an attentional efficiency-based median split analysis of 36 subjects showed that the EN was elicited in only two groups: the low efficiency orienting and executive groups. In contrast, significant LAN, N400, and P600 were elicited in all groups and only differed minimally in their physical parameters.

These data suggest that EN effects may be mere attentional modulations of the N100. We hypothesize that while comprehenders with high-efficiency attentional systems possess adequate resources to accommodate WCVs, low-efficiency comprehenders must engage additional selective attentional resources in order to process WCVs, leading to enhancements of N100 amplitude. This finding highlights the importance of investigating cognitive systems beyond working memory in sentence processing contexts.


Computational Models for the Acquisition of Phonological Constraints

Gabriel R Doyle Department of Linguistics, UCSD

+ more

Phonology, whether approached from a rule-based or Optimality Theory viewpoint, relies on a set of rules or constraints that shape the sound patterns of a language. But where does this set come from? The most common, sometimes unstated, solution is to treat the set as innate and language-universal. This universality has some explanatory benefits, but it is a strong assumption, and one influenced largely by a lack of viable methods for learning constraints.

We propose two computational models for markedness constraint acquisition in an Optimality Theory framework. The first uses minimal phonological structure to learn a set of constraint violations that can be used to identify probable constraints. The second uses a similar learning structure but includes a basic grammar for constraints to jointly learn violations and the structure of these constraints. These methods, tested on Wolof vowel harmony and English plurals, learn systems of constraints that explain the observed data as well as the constraints in a standard phonological analysis, with a violation structure that largely corresponds to the standard constraints. These results suggest that phonological constraints are theoretically learnable, making phonological acquisition behavior the critical data point for deciding between theories with innate and learned constraints. This is joint work with Klinton Bicknell and Roger Levy.
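The abstract presupposes the standard Optimality Theory evaluation procedure: candidates are scored against a ranked list of constraints, and the winner has the lexicographically minimal violation profile. The following is a minimal illustrative sketch of that evaluation step only (the learning models themselves are not reproduced here); the three toy constraints for the English plural are hypothetical simplifications, not the ones learned in the work described.

```python
def violations(candidate, constraints):
    """Violation profile: counts for each constraint, in ranking order."""
    return tuple(constraint(candidate) for constraint in constraints)

def optimal(candidates, constraints):
    """Winner = candidate with lexicographically minimal violation profile."""
    return min(candidates, key=lambda c: violations(c, constraints))

# Toy markedness/faithfulness constraints for the English plural.
# A candidate is a (stem, suffix) pair; suffixes "s", "z", "ez" stand in
# for the three plural allomorphs.

def no_sibilant_cluster(form):
    # Penalize a bare sibilant suffix directly after a sibilant-final stem.
    stem, suffix = form
    return int(stem[-1] in "szxj" and suffix in ("s", "z"))

def agree_voicing(form):
    # Penalize a voicing mismatch: voiceless "s" after a voiced-final stem,
    # or a voiced suffix after a voiceless-final stem.
    stem, suffix = form
    stem_voiced = stem[-1] in "bdgvzmnlrwaeiou"
    suffix_voiceless = (suffix == "s")
    return int(suffix_voiceless == stem_voiced)

def dep(form):
    # Penalize epenthesis (the inserted vowel of the "ez" suffix).
    return int(len(form[1]) > 1)

constraints = [no_sibilant_cluster, agree_voicing, dep]  # ranking
candidates = [("dog", "s"), ("dog", "z"), ("dog", "ez")]
print(optimal(candidates, constraints))  # -> ('dog', 'z')
```

With the same ranking, a sibilant-final stem such as "bus" selects the epenthetic candidate `("bus", "ez")`, mirroring English "buses".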


How The Eyes Recognize Language: An Investigation in Sign Language Acquisition and Adult Language Processing

Rain Bosworth and So-One Hwang Department of Psychology and Center for Research in Language, UCSD

+ more

Newborn infants demonstrate an early bias for language signals, which contributes to their ability to acquire any of the world’s diverse range of spoken languages. One study found that young infants have an attentional bias for viewing sign language narratives over pantomimes (Krentz & Corina, 2008). We recently replicated this finding with single signs and body grooming/action gestures. Thus, the human capacity to recognize linguistic input from birth may arise not simply from sensitivity to the acoustic properties of speech but from sensitivity to more general patterns that can be transmitted in either the spoken or the signed modality.

In this talk, we will describe a series of new experiments designed to investigate the following questions: How important is the temporally-encoded patterning of sign languages for 1) language recognition and acquisition in children, and 2) language processing among signing and non-signing adults? What are the gaze profiles of young infants exposed to sign language at home, and how do they compare with those of skilled adult signers? To study these questions, we created videos of natural signing, using both single-sign and narrative recordings, and novel non-linguistic stimuli by time-reversing these videos. We measured percent looking time and gaze trajectories for these “natural” and “reversed” stimuli, using a Tobii eyetracker because of its utility in testing across ages -- in infants, children, and adults. In addition to the eyetracking measures, we also obtained behavioral measures of intelligibility to better understand the impact of natural and unnatural temporal dynamics on language processing in the signed modality. Findings from this work may provide a useful tool in detecting early proficiency in language processing during infancy.


Are automatic conceptual cores the gold standard of semantic processing? The context-dependence of spatial meaning in grounded congruency effects

Larry Barsalou Department of Psychology, Emory University

+ more

According to grounded cognition, words whose semantics contain sensory-motor features activate sensory-motor simulations, which, in turn, interact with spatial responses to produce grounded congruency effects (e.g., processing the spatial feature of up for sky should be faster for up vs. down responses). Growing evidence shows these congruency effects do not always occur, suggesting instead that the grounded features in a word’s meaning do not become active automatically across contexts. Researchers sometimes use this as evidence that concepts are not grounded, further concluding that grounded information is peripheral to the amodal cores of concepts. We first review broad evidence that words do not have conceptual cores, and that even the most salient features in a word’s meaning are not activated automatically. Then, in three experiments, we provide further evidence that grounded congruency effects rely dynamically on context, with the central grounded features in a concept becoming active only when the current context makes them salient. Even when grounded features are central to a word’s meaning, their activation depends on task conditions.


Short-term memory for ASL fingerspelling and print

Zed Sevcikova School of Speech, Language, and Hearing Sciences, SDSU

+ more

This study investigates how printed and fingerspelled words are coded in short-term memory. Hearing readers recode print into a phonological code for short-term memory (STM), but evidence for phonological recoding in deaf readers has been mixed. It is unclear to what extent reading abilities or phonological awareness relate to the use of a phonological code in STM in deaf readers. In sign languages, orthography can be indirectly represented through fingerspelling. However, little is known about whether fingerspelling is used as an additional code to store and rehearse printed words, or whether fingerspelled words are recoded into English. In this study, we investigated whether phonological and manual similarity affect word list recall when to-be-recalled items are presented as print or fingerspelling. Twenty deaf ASL signers performed an immediate serial recall task with print stimuli, and another 20 deaf ASL signers did so with fingerspelled stimuli. Twenty hearing non-signers were included as a control group for printed words. All participants also completed a range of standardized reading and language assessments, including measures of spelling recognition, phonological awareness and reading comprehension. The stimuli were controlled for phonological similarity and for manual similarity. Deaf and hearing groups both displayed a phonological similarity effect for printed words. Interestingly, deaf readers also showed a phonological similarity effect for fingerspelling. We did not find evidence for a manual similarity effect for either printed words or fingerspelled words. These results suggest that in short-term rehearsal, ASL fingerspelling is quickly recoded into an English phonological code. I will further discuss these findings in the context of individual differences in phonological awareness, reading and language skills.


Remediation of abnormal visual motion processing significantly improves attention, reading fluency, and working memory in dyslexia

Teri Lawton Department of Computer Science and Engineering, UCSD

+ more

Temporal processing deficits resulting from sluggish magnocellular pathways in dorsal stream cortical areas have been shown to be a key factor limiting reading performance in dyslexics. To investigate the efficacy of reading interventions designed to improve temporal processing speed, we performed a randomized trial on 75 dyslexic second graders in six public elementary schools, comparing interventions targeting the temporal dynamics of the auditory and/or visual pathways with the school’s regular reading intervention (control group). Standardized tests of reading fluency, attention, and working memory were used to evaluate improvements in cognitive function using ANCOVAs. Most dyslexics in this study had abnormal visual motion processing, having elevated contrast thresholds for movement-discrimination on a stationary, textured background. Visual movement-discrimination training to remediate abnormal motion processing significantly improved reading fluency (both speed and comprehension), attention, phonological processing, and auditory working memory, whereas auditory training to improve phonological processing did not significantly improve these skills. The significant improvements in phonological processing and in both sequential and nonsequential auditory working memory demonstrate that visual movement-discrimination training improves auditory skills even though it trains only visual motion discrimination. This suggests that training early in the visual dorsal stream improved higher levels of dorsal-stream processing, where converging auditory and visual inputs have been found in parietal cortex, and that improving the timing and sensitivity of movement discrimination strengthens endogenous attention networks. These results implicate sluggish magnocellular pathways in dyslexia, and argue against the assumption that reading deficiencies in dyslexia are only phonologically based.


Speed reading? You've gotta be Spritzin' me

Liz Schotter Department of Psychology, UCSD

+ more

Recently, web developers have spurred excitement around the prospect of achieving speed reading with apps that use RSVP (rapid serial visual presentation) to present words briefly and sequentially. They claim that reading in this way not only makes the process faster, but also improves comprehension. In this talk, I will describe some findings from the field of reading research that contradict these claims. In particular, I will describe studies that suggest that the brain tightly controls the sequence and duration of access to information from words in sentences; therefore any piece of technology that takes away that control from the reader will impair the reading process to some degree.


Comprehension priming as rational expectation for repetition: Evidence from syntactic processing

Mark Myslin Department of Linguistics, UCSD

+ more

Why do comprehenders process repeated stimuli more rapidly than novel stimuli? The most influential hypotheses of these priming effects appeal to architectural constraints, stating that the processing of a stimulus leaves behind residual activation or strengthens its learned representation in memory. We propose an adaptive explanation: priming is a consequence of expectation for repetition due to rational adaptation to the environment. If occurrences of a stimulus cluster in time, given one occurrence it is rational to expect a second occurrence closely following. We test this account in the domain of structural priming in syntax, making use of the sentential complement-direct object (SC-DO) ambiguity. We first show that sentences containing SC continuations cluster in natural language, motivating an expectation for repetition of this structure. Second, we show that comprehenders are indeed sensitive to the syntactic clustering properties of their current environment. In a between-groups self-paced reading study, we find that participants who are exposed to clusters of SC sentences subsequently process repetitions of SC structure more rapidly than participants who are exposed to the same number of SCs spaced in time, and attribute the difference to the learned degree of expectation for repetition. We model this behavior through Bayesian belief update, showing that (the optimal degree of) sensitivity to clustering properties of syntactic structures is indeed learnable through experience. These results support an account in which comprehension priming effects are the result of rational expectation for repetition based on adaptation to the linguistic environment.
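The Bayesian belief update invoked here can be illustrated with a minimal conjugate Beta-Bernoulli sketch. This is not the authors' model; the class name, prior, and observation sequences below are all invented for illustration. A comprehender tracks the probability that an SC structure is immediately followed by another SC; clustered input pushes that belief higher than spaced input, predicting faster processing of repetitions.

```python
class RepetitionBelief:
    """Beta-Bernoulli belief about P(repetition | just saw an SC structure)."""

    def __init__(self, alpha=1.0, beta=1.0):
        # Beta(1, 1) = uniform prior over the repetition probability.
        self.alpha, self.beta = alpha, beta

    def p_repeat(self):
        # Posterior mean of the Beta distribution.
        return self.alpha / (self.alpha + self.beta)

    def observe(self, repeated):
        # Conjugate update: success increments alpha, failure increments beta.
        if repeated:
            self.alpha += 1
        else:
            self.beta += 1

# Made-up observation sequences: True = an SC was followed by another SC.
clustered = RepetitionBelief()
for outcome in [True, True, True, False, True, True]:
    clustered.observe(outcome)

spaced = RepetitionBelief()
for outcome in [False, True, False, False, True, False]:
    spaced.observe(outcome)

# Clustered exposure yields a stronger expectation for repetition.
print(clustered.p_repeat() > spaced.p_repeat())  # -> True
```

The posterior means here are 6/8 = 0.75 for the clustered learner versus 3/8 = 0.375 for the spaced learner, capturing the between-groups contrast the abstract describes.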


Hearing a Who: Preschoolers and Adults Process Language Talker-Contingently (Preview of an invited talk at CUNY 2014)

Sarah Creel UC San Diego

+ more

Listeners process sentences, but they also process people. Research in the past few decades indicates that a talker’s identity or (perceived) social group influences language processing at a variety of levels: phonological (e.g. Niedzielski, 1999), lexical (e.g. Goldinger, 1996), syntactic (Kamide, 2012), and discourse (Horton & Gerrig, 2005).

Do these instances of talker specificity reflect small-scale flexibility of highly abstract language knowledge, or do they represent a crucial facet of language processing? I argue the latter. At least two critical elements of language processing are profoundly affected by talker identity. First is phonemic category extraction: listeners who are new to a language have difficulty generalizing speech sound and word recognition to new voices, and are aided by voice variability during learning (e.g. L1: Houston & Jusczyk, 2000; L2: Lively et al., 1993). Second are higher-level expectation effects in language processing, at the level of discourse processing and “talker-semantic” encoding. I will touch briefly on issues of phonemic category extraction and word encoding, but I will primarily discuss discourse and semantic aspects of talker identity, including my own research on the development of talker processing.

A variety of studies suggest that language is a powerful cue to social groups (Eckert, 2008). Knowing someone’s social group, or even their particular identity, influences on-line sentence processing. Adults in an ERP paradigm who heard identical sentences spoken either by a congruous or incongruous talker (e.g. adult vs. child saying “I want to drink the wine”) showed a larger N400 semantic mismatch negativity to the target word when the incongruous talker spoke the sentence (Van Berkum et al., 2008). In my own research, I have shown that preschool-aged children direct eye movements preferentially to shapes of the talker’s favorite color when that individual is talking (“Show me the circle”; Creel, 2012). In collaborative work (Borovsky & Creel, in press), 3- to 10-year-olds, as well as adults, activated long-term knowledge about different individuals (e.g. pirates vs. princesses) based on who spoke the sentence. Specifically, participants hearing a pirate say “I want to hold the sword” directed eye movements preferentially to a sword picture prior to word onset, despite the presence of other pirate-related (a ship) and holdable (a wand) pictures. This suggests that children can use voice information to identify individuals and activate knowledge that constrains sentence processing in real time. Finally, a new study in my lab suggests that preschool-aged children concurrently encode novel word-referent mappings and novel person-referent mappings.

The studies reviewed here suggest that listeners’ language apprehension is affected in real time by inferences of who is speaking. This is much more consistent with an interactive view of language processing than a modular view. Even quite young children appear to condition or contextualize their language input based upon who is saying it, suggesting that language acquisition itself is talker-contingent.


Studying the role of iconicity in the cultural evolution of communicative signals

Tessa Verhoef UC San Diego

+ more

When describing the unique combination of design features that make human languages different from other communication systems, Hockett (1960) listed 'arbitrariness' among them. However, modern knowledge about languages suggests that form-meaning mappings are less arbitrary than previously assumed (Perniss et al., 2010). Sign languages especially, but also certain spoken languages (Dingemanse, 2012), are actually quite rich in iconic or motivated signals, in which there is a perceived resemblance between form and meaning. I will present two experiments to explore how iconic forms may emerge in a language, how arbitrariness or iconicity of forms relates to the affordances of the medium of communication, and how iconic forms interact and possibly compete with combinatorial sublexical structure. In these experiments, artificial languages with whistled words for novel objects were culturally transmitted in the laboratory. In the first experiment, participants learned an artificially generated whistled language and reproduced the sounds with the use of a slide whistle. Their reproductions were used as input for the next participant. Participants were assigned to two different conditions: one in which the use of iconic form-meaning mappings was possible, and one in which the use of iconic mappings was experimentally made impossible. The second experiment involved an iterated communication game. Pairs of participants were asked to communicate about a set of meanings using whistled signals. The meaning space was designed so that some meanings could be more easily paired with an iconic form while others were more difficult to map directly onto the medium of communication. Findings from both experiments suggest that iconic strategies can emerge in artificial whistled languages, but that iconicity can also become degraded when forms change to become more consistent with emerging sound patterns. Iconicity seems more likely to persist and contribute to successful communication if it serves as a means for establishing systematic patterns.


Parallel language activation and inhibitory control in bimodal bilinguals

Marcel Giezen San Diego State University

+ more

Bilinguals non-selectively access word candidates from both languages during auditory word recognition. To manage such cross-linguistic competition, they appear to rely on cognitive inhibition skills. For instance, two recent studies with spoken language bilinguals found that individual differences in nonlinguistic conflict resolution abilities predicted language co-activation patterns. It has been suggested that the association between parallel language activation and performance on certain inhibitory control tasks reflects underlying similarities in cognitive mechanisms, more specifically, the processing of perceptual conflict. In the present study, we put this idea to the test by investigating the relationship between language co-activation and inhibitory control for bilinguals with two languages that do not perceptually compete, namely bimodal bilinguals.

Parallel language activation was examined with the visual world eye-tracking paradigm. ASL-English bilinguals’ eye movements were monitored as they listened to English words (e.g., “paper”) while looking at displays with four pictures including the target picture, a cross-linguistic phonological competitor (e.g., cheese; the ASL signs for cheese and paper only differ in their movement), and two unrelated pictures. Results showed that competitor activation during the early stages of word recognition correlated significantly with inhibition performance on a non-linguistic spatial Stroop task. Bilinguals with a smaller Stroop effect (indexing more efficient inhibition) exhibited fewer looks to ASL competitors.

Our results indicate that bimodal bilinguals recruit domain-general inhibitory control mechanisms to resolve cross-linguistic competition. Importantly, because spoken and sign languages do not have a shared phonology, this suggests that the role of inhibitory control in bilingual language comprehension is not limited to resolving perceptual competition at the phonological level, but also cross-linguistic competition that originates at the lexical and/or conceptual level. These findings will be discussed within current frameworks of bilingual word recognition and in light of the ongoing debate on bilingual advantages in cognitive control.


Fluid Construction Grammar

Luc Steels ICREA, Institute for Evolutionary Biology (UPF-CSIC), Barcelona; VUB AI Lab Brussels

+ more

Fluid Construction Grammar (FCG) is an operational computational formalism trying to capture key insights from construction grammar, cognitive linguistics and embodiment semantics. The central unit of description is a construction with a semantic and a syntactic pole. Constructions formulate constraints at any level of language (phonetics, phonology, morphology, syntax, semantics and pragmatics) and are applied using unification-style match and merge operations. FCG uses a semantics which is procedural and grounded in sensori-motor states. Flexible language processing and learning is implemented using a meta-level in which diagnostics detect anomalies or gaps and repair strategies try to cope with them, by ignoring ungrammaticalities or expanding the language system. FCG has been used chiefly as a research tool for investigating how grounded language can emerge in populations of robots.

This talk presents an overview of FCG and is illustrated with a live demo.

+ Steels, L. (2013) Fluid Construction Grammar. In Hoffmann, T. and G. Trousdale (eds.) Handbook of Construction Grammar. Oxford University Press, Oxford.

+ Steels, L. (ed.) (2011) Design Patterns in Fluid Construction Grammar. John Benjamins Pub. Amsterdam.
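The unification-style match and merge operations described above can be sketched with flat feature structures. This toy sketch is not the actual FCG formalism (FCG uses structured transient states with semantic and syntactic poles); the feature names and the "lexical construction" below are invented for illustration.

```python
def unify(fs1, fs2):
    """Merge two flat feature structures; return None if any feature conflicts."""
    merged = dict(fs1)
    for feature, value in fs2.items():
        if feature in merged and merged[feature] != value:
            return None  # match fails: incompatible constraints
        merged[feature] = value
    return merged

# A hypothetical lexical construction applied to a transient structure:
lexical = {"string": "ball", "category": "noun", "number": "singular"}
transient = {"string": "ball", "meaning": "BALL"}

print(unify(lexical, transient))
# -> {'string': 'ball', 'category': 'noun', 'number': 'singular', 'meaning': 'BALL'}

# Conflicting values block application of the construction:
print(unify({"number": "singular"}, {"number": "plural"}))  # -> None
```

The match phase corresponds to the conflict check, and the merge phase to extending the transient structure with the construction's remaining features.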


Olfactory language across cultures

Asifa Majid Radboud University Nijmegen

+ more

Plato proposed: “the varieties of smell have no name, and they have not many, or definite and simple kinds; but they are distinguished only as painful and pleasant”. This view pervades contemporary thought, and experimental data to date provide ample support. This has led researchers to propose that a biological limitation must underlie our inability to name smells. However, recent studies with two hunter-gatherer communities in the Malay Peninsula challenge this received wisdom. Jahai speakers, for example, were able to name odors with the same ease with which they named colors, unlike a matched English sample of participants who struggled to name familiar Western odors. Moreover, Jahai speakers use a set of dedicated smell verbs to describe different smell qualities. Nothing comparable exists in today's conversational English. The Jahai are not the only group with such a smell lexicon. A related language, Maniq, also shows a sizeable smell lexicon, although the precise terms differ from those found in Jahai. The Maniq smell lexicon shows a coherent internal structure organised around two dimensions, pleasantness and dangerousness. Together these languages show that the poor codability of odors is not a necessary product of how the brain is wired, but rather a matter of cultural preoccupation.


Effects of literacy on children’s productions of complex sentences

Jessica L. Montag Indiana University

+ more

When people speak, they have many choices about how to say what they want to say. This largely unconscious process – of choosing words and sentence structures – is poorly understood. I will argue that we can begin to understand these production choices by understanding what is easy or difficult for speakers to produce. One aspect of this difficulty is the frequency with which a speaker has encountered or produced an utterance in the past. In my talk, I will be discussing a set of corpus analyses and production experiments with children and adults. I investigated how the amount of language experience and emerging literacy affect production choices. These studies show how children gradually learn to identify alternative language forms from their linguistic environment, how the linguistic environment changes over time as children grow, and how children’s control over complex sentence structures continues to develop well after early stages of language learning.


Impact of language modality for gesturing and learning

So-One Hwang UC San Diego

+ more

Our research team has been investigating whether gesture can be reliably distinguished from language when they are expressed in the same modality by deaf signers, and whether gesture plays a role in problem-solving tasks as it does for young hearing children using co-speech gesture (Goldin-Meadow et al. 2012). Building upon the finding that gesture can be used to predict readiness to learn math equivalence among 9-12 year old deaf students, here we tested 5-8 year olds (n=33) on conservation knowledge. Piagetian conservation tasks involve comparisons of objects that are transformed in shape or configuration but not quantity. We asked the children to make judgments about objects’ quantities and asked them to explain their answers. Because young children often describe the appearance of objects in their explanations, we faced methodological challenges. In ASL, shapes and configurations are typically described using polycomponential forms called classifiers. Classifiers are described in the sign language research literature as distinct from lexical signs, but it is not clear whether they too are lexical or instead have properties of gesture. Our results suggest 1) that lexical signs are like words in their ability to refer to abstract properties, and 2) that classifiers can be used flexibly as either lexical forms or gestural forms. The findings suggest that gesture can be beneficial in problem-solving contexts when it supplements rather than substitutes for core linguistic formats for thinking.


Tarzan Jane Understand? A least-joint-effort account of constituent order change

Matt Hall University of Connecticut

+ more

All natural languages evolve devices to communicate who did what to whom. Elicited pantomime offers one model for studying this process, providing a window into how humans (hearing non-signers) behave in a natural communicative modality (silent gesture) without the established conventions of a grammar. In particular, we use this system to understand the cognitive pressures that might lead languages to shift from Subject-Object-Verb (SOV) toward Subject-Verb-Object (SVO): a pattern that is widely attested over both long and short timescales.

Previous research on production finds consistent preferences *for* SOV in "canonical" events (e.g. a woman pushing a box) but *against* SOV in "reversible" events (e.g. a woman pushing a man). Comprehenders, meanwhile, seem to have no objection to SOV for either type of event, suggesting that ambiguity-based accounts of the production data are unlikely. However, both production and comprehension have previously been tested in isolation. Here we ask whether SVO might emerge, for both reversible and canonical events, as a result of dynamic interaction between producers and comprehenders engaged in real-time communication.

Experiment 1 asked participants to describe both canonical and reversible events in gesture, in two conditions: interactive and solitary. In the interactive condition, two naive subjects took turns describing scenes to one another. In the solitary condition, one participant described the same scenes to a camera. In addition to replicating previous findings, results showed that SVO did increase more in the interactive condition than in the solitary condition, but only among reversible events. SVO also increased among the canonical events, but to the same extent in both interactive and solitary conditions. Experiment 2 ruled out the possibility that the SVO rise among canonical events simply reflects English recoding, and instead demonstrated that it depends on the presence of reversible events.

So why do languages shift toward SVO? The need to communicate about reversible events seems to be part of the answer, but the fact that canonical events also shift toward SVO may be due to production-internal mechanisms. Identifying these mechanisms is a target for future research.


A different approach to language evolution

Massimo Piattelli-Palmarini University of Arizona

+ more

For many authors, it is literally unthinkable that language as we know it cannot have evolved under the pressure of natural selection for communication, better thinking and social cohesion. The first model I will examine, showing its radical inadequacy, is, therefore, the adaptationist one. In our book, Jerry Fodor and I have tried to explain at some length (Fodor and Piattelli-Palmarini, "What Darwin Got Wrong" 2011) what is wrong quite generally with neo-Darwinian adaptationist explanations. But, even admitting, for the sake of the argument, that such explanations do apply to biological traits in general, I will concentrate on the specific defects of such explanations in the case of language. Syntax has not been shaped, as I will show, by communication or social cohesion. A second model I will criticize is one that conceptualizes language as an application of general cognitive traits, innate generic predispositions to categorize, extract statistical regularities from a variety of inputs, make inferences, learn from experience and assimilate the cultural norms of the surrounding community. The third model is based on general conditions of learnability and progressive simplification of the mental computations attributed to our mastering of language. Computer models of iterative learning, of the stepwise convergence of neural networks on simple solutions, and evolutionary considerations postulating the progressive shaping of language towards better learnability, will be examined and their implausibility explained. Finally, I will present a quite different model, still under development. It appears to be very promising and innovative and capable of re-configuring the entire issue of language evolution. Very recent data from several laboratories and several fields bring further implicit endorsement to this model. 
In essence, I will offer reasons to conclude that optimization constraints and rules of strict locality allow for some variability under the effects of external inputs, but this range of variation is quite limited, and concentrated in a relatively small fixed number of points, in conformity with what the linguistic model of Principles and Parameters suggested 25 years ago.


Knowing too much and trying too hard: why adults struggle to learn certain aspects of language

Amy Finn Massachusetts Institute of Technology

+ more

Adults are worse than children when it comes to learning certain aspects of language. Why is this the case when adults are better than children on most other measures of learning, including almost every measure of executive function? While many factors contribute to this age-related learning difference, I will present work that shows that (1) linguistic knowledge, (2) mature, language-specific neural networks, and (3) mature cognitive function all contribute to these age-related differences in language learning outcomes.


The changing structure of everyday experience in the first two years of life

Caitlin Fausey Indiana University

+ more

Human experience may be construed as a stream - in time - of words and co-occurring visual events. How do the statistical and temporal properties of this stream engage learning mechanisms and potentially tune the developing system? In this talk, I will describe ongoing work designed to characterize 1) the changing rhythm of daily activity, 2) the changing visual availability of important social stimuli like faces and hands, and 3) the changing distributions of object instances with the same name. This ongoing research suggests that the statistical structure of the learning environment is dynamic and gated by young children's developmental level. The conjecture is that structure in everyday activities - at multiple timescales, and changing over the course of development - may drive change in the cognitive system.


Speak for Yourself: Simultaneous Learning of Words and Talkers’ Preferences

Sarah Creel

+ more

Language presents a complex learning problem: children must learn many word-meaning mappings, as well as abundant contextual information about words’ referents. Can children learn word-referent mappings while also learning context (individuals’ preferences for referents)? Three experiments (n=32 3-5-year-olds each) explored children’s ability to map similar-sounding novel words to referents while also learning talkers’ preferred referents. Both accuracy (assessing word learning) and moment-by-moment visual fixations (assessing talker preference knowledge) were recorded. Words were learned accurately throughout. When liker information (“I want” or “Anna wants”) occurred early in the sentence, children rapidly looked to the liker’s favorite picture. However, when liker information occurred after the target word, children used voice information, even if the speaker ended up naming the other character (“…for Anna”). When liker and talker were dissociated during learning (each talker labeled the other’s favorite), children showed no looking preferences. Results suggest sophisticated encoding of multiple cues during language development.


Signing in the Visual World: Effects of early experience on real-time processing of ASL signs

Amy Lieberman

+ more

Signed languages present a unique challenge for studying real-time lexical recognition, because the visual modality of sign requires the signer to interpret the linguistic and referential context simultaneously. Deaf individuals also vary widely in the timing and quality of initial language exposure. I will present a series of studies investigating real-time lexical recognition via eye-tracking in adult signers who varied in their age of initial exposure to sign language. Using a novel adaptation of the visual world paradigm, we measured the time course and accuracy of lexical recognition of ASL signs, and the effect of phonological and semantic competition on the time course of sign processing. I will discuss implications with regard to the impact of early experience on later linguistic processing skills.


Different strokes: gesture phrases in Z, a first-generation family homesign.

John Haviland

+ more

"Z" is an emerging sign language isolate used in a single extended family, including three deaf siblings, in highland Chiapas, Mexico, where the surrounding spoken language is Tzotzil (Mayan). In order not to prejudge the constituents and categories of Z, I try to apply in rigorous formal fashion a model of phrase structure derived from studies of the "speaker's gestures" that accompany spoken language. I then evaluate the virtues and potential vices of such a methodologically austere approach as applied to spontaneous, natural conversation in Z.


What do you know and when do you know it?

Ben Amsel

+ more

How is knowledge organized in memory? How do we access this knowledge? How quickly are different kinds of knowledge available following visual word perception? I'll present a series of experiments designed to advance our understanding of these questions. I'll show that the timing of semantic access varies substantially depending on the type of knowledge to be accessed, and that some kinds of information are accessed very rapidly. I'll demonstrate that different kinds of knowledge may be recruited flexibly to make specific decisions. I'll also present strong evidence that the neural processing systems subserving visual perception are directly involved in accessing knowledge about an object’s typical color. Taken together, these findings are most consistent with a flexible, fast, and at least partially grounded semantic memory system in the human brain.


Duck, Duck, ... Mallard: Advance Word Planning Facilitates Production of Dispreferred Alternatives

Dan Kleinman

+ more

Consider the spoken sentence “Dan fell asleep yesterday on the lab couch.” The speaker likely planned most of its semantic content prior to speech onset (e.g., deciding that the last word would refer to the piece of furniture in question). However, due to the attention-demanding nature of word selection, the speaker may not have selected the final word (“couch”, instead of the equally acceptable “sofa”) until shortly before it was uttered. This difference in automaticity means that, relative to a word produced in isolation, words produced in connected speech can be planned for longer prior to selection. How does this additional pre-selection planning affect the words that speakers choose to say?
I will present two experiments that tested the hypothesis that this extra time increases the accessibility of dispreferred responses. In each experiment, 100 subjects named critical pictures with multiple acceptable names (e.g., “couch”, used by 80% of subjects in a norming study, or “sofa”, used by 20%) under conditions that manipulated how long subjects could plan prior to speaking. In Experiment 1, pictures presented in a dual-task context elicited more dispreferred names (such as “sofa”) than pictures presented in a single-task context. In Experiment 2, pictures named at the end of a sentence (“The tent is above the sofa”) elicited more dispreferred names (at fast response latencies) than pictures named at the beginning of a sentence (“The couch is above the tent”).
These results indicate that when word selection is delayed, low-frequency responses have more time to become accessible and thus are produced more often. Because attentional bottlenecks in language production effectively delay the selection of most words during natural speech, the words we choose are influenced by our ability to plan them in advance.


Let’s take a look at light verbs: Relationships between syntax, semantics, and event conceptualization

Eva Wittenberg Institut für Germanistik, Potsdam University

+ more

Light verb constructions, such as "Julius is giving Olivia a kiss", create a mismatch at the syntax-semantics interface. Typically, each argument in a sentence corresponds to one semantic role, such as in "Julius gave Olivia a book", where Julius is the Source, Olivia the Goal, and the book the Theme. However, a light verb construction such as “Julius gave Olivia a kiss” with three arguments describes the same event as the transitive “Julius kissed Olivia” with two arguments: Julius is the Agent, and Olivia the Patient. This leads to several questions: First, how are light verb constructions such as "giving a kiss" processed differently from sentences such as "giving a book"? Second, at which structural level of representation would we find sources of this difference? Third, what is the effect of using a light verb construction such as "giving a kiss", as opposed to "kissing", on the event representation created in a listener? I will present data from an ERP study, an eye-tracking study, and several behavioral studies to answer these questions.


Accessing Cross Language Categories in Learning a Third Language

Page Piccinini Department of Linguistics, UCSD

+ more

Current theories differ on how bilinguals organize their two languages, including their sound systems. The debate centers on whether bilinguals have constant access to both systems (Green, 1998; c.f. Johnson, 1997; Pierrehumbert, 2002) or to one system at a time (Cutler et al., 1992; Macnamara & Kushnir, 1971). This study examines these theories by testing the ability of early Spanish-English bilinguals to access distinctions within the voice onset time (VOT) continuum when learning a third language that uses VOT categories from both Spanish and English. Participants were tested on Eastern Armenian, which has a three-way VOT contrast: negative, short-lag and long-lag VOT (cf. English, which largely distinguishes short-lag from long-lag VOT, and Spanish, which contrasts negative and short-lag VOT). Participants first completed a production task, followed by either an AX discrimination task or an ABX discrimination task. Of those who participated in the AX task, half received instructions in English and half in Spanish; all ABX participants received instructions in Spanish. Language dominance was also assessed via a questionnaire to see how being dominant in one language over another could affect production and perception of the three-way contrast. In the production experiment, there was a significant difference in VOT durations between all three VOT categories. However, there was a significant interaction with language dominance, whereby balanced bilinguals reliably produced the negative VOT category while English-dominant bilinguals did not. There was no effect of language of instruction. In the AX discrimination task, participants were significantly above chance at discriminating negative VOT from long-lag VOT, significantly below chance at discriminating negative VOT from short-lag VOT, and at chance at discriminating short-lag VOT from long-lag VOT.
There was no significant effect of either language of instruction or language dominance. Preliminary results from the ABX discrimination task suggest bilinguals can accurately discriminate all three contrasts. There was a marginally significant effect of language dominance, with balanced bilinguals discriminating negative from short-lag VOT better than English-dominant bilinguals. These results suggest that in production early Spanish-English bilinguals can reliably produce the three-way contrast, but only if they are balanced in both languages. In perception, early Spanish-English bilinguals are able to discriminate the three-way contrast, as shown by the ABX discrimination task, especially if they are more balanced. However, early Spanish-English bilinguals, both balanced and English-dominant, have a preference for languages to have only a two-way contrast, as shown by the AX discrimination task. Overall these results support a theory whereby bilinguals have access to sounds from both of their languages at once, particularly if they are balanced bilinguals.
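As a rough illustration (my own sketch, not part of the study), the three-way contrast can be thought of as a partition of the VOT continuum. The category boundaries below (0 ms and roughly 30 ms) are conventional textbook approximations that I am assuming for the example, not values measured from the participants.

```python
def vot_category(vot_ms):
    """Toy classifier for the three-way VOT contrast.
    Boundaries (0 ms, ~30 ms) are assumed approximations,
    not measurements from this study."""
    if vot_ms < 0:
        return "negative"    # prevoicing, as in Spanish voiced stops
    elif vot_ms <= 30:
        return "short-lag"   # Spanish voiceless / English voiced stops
    else:
        return "long-lag"    # aspirated, as in English voiceless stops

print(vot_category(-60), vot_category(15), vot_category(70))
# negative short-lag long-lag
```

On this picture, Spanish uses only the negative/short-lag boundary, English mainly the short-lag/long-lag one, and Eastern Armenian both at once.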


The (un)automaticity of structural alignment

Iva Ivanova UCSD Psychology Department

+ more

Interlocutors in a dialogue often mirror each other’s linguistic choices at different levels of linguistic representation (interactive alignment), which facilitates conversation and promotes rapport (Pickering & Garrod, 2004). However, speakers frequently engage in concurrent activities while in dialogue, such as typing, reading or listening to programs. Is interactive alignment affected by concurrent participation in other activities that pose demands on working memory? In this talk, I will focus on alignment of structure, which happens as a result of structural priming (Branigan et al., 2000; Jaeger & Snider, 2013). Specifically, I will present three experiments investigating whether structural priming is affected by verbal working memory load. As a whole, the findings suggest that concurrent verbal working memory load may disrupt structural alignment at a (potentially) conceptual but not at a syntactic level of structural processing. In practical terms, they imply that one might align less with one’s interlocutor if simultaneously scanning Facebook updates.
Please note that this is a version of a talk I presented at CUNY this year.


Ups and downs in auditory development: Preschoolers discriminate contour but fall flat on audiovisual mapping

Sarah Creel Cognitive Science Department, UCSD

+ more

How do children hear the world? Previous research suggests that even infants are sensitive to pitch contour—the ups and downs in a periodic acoustic source. Contour sensitivity is presumed to form the basis for later perception of culture-specific musical patterns (e.g. the Western major scale), and for apprehending musical metaphors (“rising” pitches are upward motion). The current study shows that 4-5-year-old children, while they reliably distinguish contour differences, cannot use contour differences in an audiovisual mapping task. This is not due to general difficulty in associating nonspeech sounds with images. Results call into question the primacy of contour as a dimension of musical representation. Further, results mirror a phenomenon previously observed in word learning (Stager & Werker, 1997), wherein highly-discriminable percepts are difficult for children to associate with visual referents. Thus, difficulty in mapping similar-sounding words to referents may reflect more general difficulty in auditory-visual association learning, likely due to memory interference.
FYI: This is a version of a talk I have given in COGS 200 and the Psychology Cognitive Brownbag.


Investigating the relations among components of language in typically developing children and children with neurodevelopmental disorders

Lara Polse Joint Doctoral Program, SDSU & UCSD

+ more

Language is a complex multifaceted system, and as we use spoken and written language we simultaneously recruit an array of interrelated linguistic subsystems. While these subsystems have been studied extensively during language acquisition, we know little about the organization and relations among these components in the school-age years. In this talk, I will present four investigations in which I use classically defined components of language (phonological, lexico-semantic, and syntactic) as well as components of reading (orthographic and semantic) as a tool to explore the relations amongst elements that comprise the language system in school-age typically developing children and children with neurodevelopmental disorders (aged 7-12). Investigating the composition of the language system in children with neurodevelopmental disorders that affect language will not only help to create more targeted interventions for these children, but will also provide a unique window through which to better understand the underlying structure and organization of language in typically developing children.


Meaning Construction in the Embodied and Embedded Mind

Seana Coulson Cognitive Science Department, UCSD

+ more

In classical cognitive science, the body was merely a container for the physical symbol system that comprised the mind. Today, the body plays an increasingly important role in cognitive accounts as next generation cognitive scientists explore the idea that knowledge structures exploit partial reactivations of perceptual, motoric, and affective brain systems. First, the state of one’s own body might affect the way we understand other people’s emotional states as well as language about emotional events. Second, we might observe how other people move their bodies during speech in order to better understand their meaning. Third, we might attend to the way in which speakers’ gestures coordinate internal mental processes with external cultural inscriptions. Accordingly, I describe a series of behavioral and electrophysiological studies that address the real time comprehension of emotional language, iconic gestures in discourse about concrete objects and events, and environmentally coupled gestures in children’s discourse about mathematics.


Semantic Preview Benefit in Reading: Type of Semantic Relationship Matters

Liz Schotter Psychology Department, UCSD

+ more

Reading is efficient because of the ability to start processing upcoming words before they are fixated (see Schotter, Angele, & Rayner, 2012 for a review). To demonstrate preprocessing of upcoming words, researchers use the gaze-contingent boundary paradigm (Rayner, 1975), in which a preview word changes to a target word during the saccade to it (using eye trackers to monitor fixation location and duration). Reading time measures on the target are compared between various related preview conditions and an unrelated control condition. Faster processing in a related condition compared to the unrelated condition suggests preview benefit—that information was obtained from the preview word parafoveally and used to facilitate processing of the target once it is fixated. While preprocessing of upcoming words at the orthographic and phonological levels is not controversial (i.e., it is well-documented and accounted for in many models of reading), semantic preprocessing of upcoming words is debated: support in the literature is mixed, and whether or not there is such an effect has been suggested as a means to distinguish between the two most prominent models of reading, E-Z Reader (e.g., Reichle, Pollatsek, Fisher & Rayner, 1998) and SWIFT (e.g., Engbert, Longtin, & Kliegl, 2002). In this talk, I present two studies using the gaze-contingent boundary paradigm, demonstrating semantic preview benefit in English when the preview and target are synonyms, but not when they are semantically related without being synonymous. I argue that the type of semantic relationship shared between the preview and target has a strong influence on the magnitude of preview benefit, and I discuss this finding in relation to prior studies finding semantic preview benefit (in German and Chinese) and not finding it (in English).


A dynamic view of language production

Gary Oppenheim Center for Research in Language, UCSD

+ more

In searching to understand how language production mechanisms work in the moment, we often forget how adaptable they are. In this talk, I'll present a high-level overview of some work that explores this adaptability on two timescales. The first part will focus on speakers' ability to take a system developed for communication and use it (perhaps predominantly) as a tool for thought: inner speech. Here I'll revisit Watson's (1913) claim that, "thought processes are really motor habits in the larynx." Then I'll consider adaptation on a longer timescale, with the idea that speakers achieve fluent production by continually re-optimizing their vocabularies with every word retrieval throughout their lives. Here I'll show that a simple incremental learning model naturally explains and predicts an array of empirical findings that our static models have struggled to explain for decades.

Note: This will be a rehearsal for an open-specialization faculty job talk that I'll present at Bangor University (Wales) on May 2. My goal is to polish it into the best 30-minute talk ever, so I would very much appreciate any constructive criticism.


Experimental evidence for a mimesis-combinatoriality tradeoff in communication systems

Gareth Roberts Yeshiva University

+ more

Sign languages tend to represent the world less arbitrarily than spoken languages, exploiting a much richer capacity for mimesis in the manual modality. Another difference between spoken and signed languages concerns combinatoriality. Spoken languages are highly combinatorial, recombining a few basic forms to express an infinite number of meanings. While sign languages exhibit combinatoriality too, they employ a greater number of basic forms. These two differences may be intimately connected: The less a communication system mimics the world, the greater its combinatoriality. We tested this hypothesis by studying novel human communication systems in the laboratory. In particular we manipulated the opportunity for mimesis in these systems and measured their combinatoriality. As predicted we found that combinatoriality was greater when there was less opportunity for mimesis and, furthermore, that mimesis provided scaffolding for the construction of communication systems.


Abstract knowledge vs direct experience in linguistic processing

Emily Morgan UCSD, Linguistics Dept.

+ more

Abstract linguistic knowledge allows us to understand novel expressions which we have never heard before. It remains an outstanding question, however, what role this abstract linguistic knowledge plays in determining processing difficulty for expressions that are _not_ novel: those with which the speaker has had direct experience. We investigate this in the case of "binomial expressions" of the form "X and Y". Many common binomial expressions have a preferred order (e.g. "bride and groom" vs "groom and bride"). These ordering preferences are predictable from a small number of linguistic factors. Alternatively, preferences for commonly attested binomial expressions could be attributed to the frequency of speakers' direct experience with these expressions. Using a combination of probabilistic modeling and human behavioral experiments, we investigate the roles of both abstract linguistic constraints and direct experience in the processing of binomial expressions.


Grounding speech with gaze in dynamic virtual environments

Matthew Crocker Saarland University, Germany

+ more

The interactive nature of dialogue entails that interlocutors are constantly anticipating what will be said next and speakers are monitoring the effects of their utterances on listeners. Gaze is an important cue in this task, potentially providing listeners with information about the speaker's next referent (Hanna & Brennan, 2007) and offering speakers some indication about whether listeners correctly resolved their references (Clark & Krych, 2004).
In this talk, I will first review some recent findings that quantify the benefits of speaker gaze (using a virtual agent) for human listeners. I will then present a new study which demonstrates that a model of speech generation that exploits real-time listener gaze – and gives appropriate feedback – enhances reference resolution by the listener: In a 3D virtual environment, users followed spoken directional instructions, including pressing a number of buttons that were identified using referring expressions generated by the system (see GIVE; Koller et al., 2010). Gaze to the intended referent following a referring expression was taken as evidence of successful understanding and elicited positive feedback; by contrast, gaze to other objects triggered early negative feedback. We compared this eye movement-based feedback strategy with two baseline systems and found that the eye-movement-based feedback leads to significantly better task performance than the other two strategies on a number of measures. From a methodological perspective, our findings more generally show that real-time listener gaze immediately following a referring expression reliably indicates how a listener resolved the expression, even in dynamic, task-centered, visually complex environments.
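For concreteness, the feedback strategy described above can be sketched as a decision rule over time-stamped fixations. The function name, the data format, the first-fixation-decides rule, and the one-second window are my own assumptions for illustration, not details of the actual system.

```python
def gaze_feedback(fixations, intended_referent, window_ms=1000):
    """Hypothetical sketch of a listener-gaze feedback rule: the first
    fixation within a time window after a referring expression decides.
    Gaze on the intended referent yields positive feedback; gaze on any
    other object yields early negative feedback; no fixation yet yields
    no feedback. fixations: (time_ms, object_id) pairs in time order."""
    for time_ms, fixated in fixations:
        if time_ms > window_ms:
            break  # too late to count toward this referring expression
        if fixated == intended_referent:
            return "positive"
        return "negative"  # early corrective feedback on a wrong object
    return None  # no decisive gaze observed; keep monitoring

print(gaze_feedback([(250, "button_3")], intended_referent="button_3"))
# positive
```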


Regularization behavior in a non-linguistic domain

Vanessa Ferdinand University of Edinburgh

+ more

Language learners tend to regularize variable input and some claim that this is due to a language-specific regularization bias. I will present the results of two frequency learning experiments in a non-linguistic domain and show that task demands modulate regularization behavior. When participants track multiple frequencies concurrently, they partially regularize their responses, and when there is just one frequency to track, they probability match from their input data. These results will be compared to matched experiments in the linguistic domain, and some pilot results will be presented. The goal here is to partial out the regularization behavior related to task demands (such as memory limitations), and that which may be due to domain-specific expectations of one-to-one mappings between variants and objects. A Bayesian model is fit to the experimental data to quantify regularization biases across experiments and explore the long-term cultural evolutionary dynamics of regularization and probability matching in relation to a null model, drift.
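To make the contrast between probability matching and regularization concrete, here is a toy simulation. This is my own illustration, not the Bayesian model fit in the talk, and the single `regularization` parameter is a hypothetical knob, not a fitted bias term.

```python
import random

def produce(p_input, regularization, n=10000, seed=1):
    """Simulate n productions from a learner whose input contained the
    majority variant with relative frequency p_input (> 0.5).
    regularization = 0.0 -> pure probability matching;
    regularization = 1.0 -> fully regularized (majority variant only).
    Returns the proportion of majority-variant productions."""
    rng = random.Random(seed)
    p_out = p_input + regularization * (1.0 - p_input)
    return sum(rng.random() < p_out for _ in range(n)) / n

matcher = produce(0.7, regularization=0.0)      # stays near 0.7
regularizer = produce(0.7, regularization=0.6)  # overshoots toward 1.0
print(round(matcher, 2), round(regularizer, 2))
```

A probability matcher reproduces the input frequency, while a regularizer's output frequency exceeds it; fitting such a parameter per condition is one way to quantify how much task demands push behavior away from matching.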


Combinatorial structure and iconicity in artificial whistled languages

Tessa Verhoef University of Amsterdam - ACLC

+ more

Duality of patterning, one of Hockett's (1960) basic design features of language, has recently received increased attention (de Boer, Sandler, & Kirby, 2012). This feature describes how, in speech, a limited number of meaningless sounds are combined into meaningful words and those meaningful words are combined into larger constructs. How this feature emerged in language is currently still a matter of debate, but it is increasingly being studied with the use of a variety of different techniques, including laboratory experiments. I will present a new experiment in which artificial languages with whistle words for novel objects are culturally transmitted in the laboratory. The aim of this study is to extend an earlier study in which it was shown that combinatorial structure emerged in sets of meaningless whistles through cultural evolution. In the new study meanings are attached to the whistle words and this further investigates the origins and evolution of combinatorial structure. Participants learned the whistled language and reproduced the sounds with the use of a slide whistle. Their reproductions were used as input for the next participant. Two conditions were studied: one in which the use of iconic form-meaning mappings was possible and one in which the use of iconic mappings was experimentally made impossible, so that we could investigate the influence of iconicity on the emergence of structure.


On defining image schemas

Jean Mandler

+ more

There are three different kinds of cognitive structure that have not been differentiated in the cognitive linguistic literature. They are spatial primitives, image schemas, and schematic integrations. Spatial primitives are the first conceptual building blocks formed in infancy, image schemas are simple spatial stories built from them, and schematic integrations use the first two types to build concepts that include nonspatial elements, such as force and emotion. These different kinds of structure have all come under the umbrella term of image schemas. However, they differ in their content, developmental origin, imageability, and role in meaning construction.


Explaining "I can't draw": Parallels in the structure and development of drawing and language

Neil Cohn

+ more

Why is it that many people feel that they "can't draw"? Both drawing and language are fundamental and unique to humans as a species. Just as language is a representational system that uses systematic sounds (or manual/bodily signs) to express concepts, drawing is a means of graphically expressing concepts. Yet, unlike language, we consider it normal for people not to learn to draw, and consider those who do to be exceptional. I argue that the structure and development of drawing are indeed analogous to that of language, and that most people who "can't draw" have a drawing system parallel with the resilient systems of language that appear when children are not exposed to a linguistic system within a critical developmental period (such as "homesign").


Reasoning with Diagrams in Chronobiology

William Bechtel

+ more

Diagrams are widely used to communicate in biology. But what other functions do they play? I will argue that they are often the vehicles of reasoning, both for individuals and collectives. They serve to characterize and conceptualize the phenomenon to be explained. The construction and revision of diagrams is central to the activities of proposing and revising mechanistic explanations of the phenomenon. To illustrate these roles, I will focus on research on circadian rhythms, endogenously generated rhythms of approximately 24 hours that regulate a large range of biological phenomena across all orders of life. Visual representations are crucial to understanding the periodicity and entrainment of these oscillations and to reasoning about the complex interacting feedback mechanisms proposed to explain them.


Building Meanings: The Computations of the Composing Brain

Liina Pylkkänen New York University

+ more

Although the combinatory potential of language is in many ways its defining characteristic, our understanding of the neurobiology of composition is still grossly generic: research on the brain bases of syntax and semantics implicates a general network of “sentence processing regions” but the computational details of this system have not been uncovered. For language production, not even a general network has yet been delineated. Consequently, the following two questions are among the most pressing for current cognitive neuroscience research on language:
(i) What is the division of labor among the various brain regions that respond to the presence of complex syntax and semantics in comprehension? What are the computational details of this network?
(ii) How does the brain accomplish the construction of complex structure and meaning in production? How do these processes relate to parallel computations in comprehension?
In our research using magnetoencephalography (MEG), we have systematically varied the properties of composition to investigate the computational roles and spatiotemporal dynamics of the various brain regions participating in the construction of complex meaning. The combinatory network as implicated by our research comprises at least an early (~200-300ms), computationally specialized contribution of the left anterior temporal lobe (LATL) followed by later and more general functions in the ventromedial prefrontal cortex (vmPFC) and the angular gyrus (AG). The same regions appear to operate during production but in reverse order. In sum, contrary to hypotheses that treat natural language composition as monolithic and localized to a single region, the picture emerging from our work suggests that composition is achieved by a network of regions which vary in their computational specificity and domain generality.


Complexity is not Noise: Using Redundancy and Complementarity in the Input to Simplify Learning

Jon A. Willits Indiana University

+ more

Language acquisition has often been cast as an enormously difficult problem, requiring innate knowledge or very strong constraints for guiding learning. I will argue that this alleged difficulty arises from a mischaracterization of the learning problem, whereby it is assumed (implicitly, at least) that language learners are solving a set of independent problems (e.g. word segmentation, word-referent mappings, syntactic structure). In fact, these problems are not independent, and children are learning them all at the same time. But rather than this making language acquisition even more difficult, these interactions immensely simplify the learning problem, by allowing children to take what they have learned in one domain and use it to immediately constrain learning in others. In this talk, I will focus on interactions between the lexicon and syntactic structure, and discuss corpus analyses, computational models, and behavioral experiments with infants and adults. These studies will demonstrate how redundancy and complementarity in the input help children and adults solve a number of learning and comprehension problems, such as learning syntactic nonadjacent dependencies via semantic bootstrapping, and dealing with interactions between semantic and syntactic structure in language processing.


Learnability of complex phonological interactions: an artificial language learning experiment

Mike Brooks, Bozena Pajak, and Eric Bakovic

+ more

What inferences do learners make based on partial language data? We investigated whether exposure to independent phonological processes in a novel language would lead learners to infer their interaction in the absence of any direct evidence in the data. Participants learned to form compounds in an artificial language exhibiting independently-triggered phonological processes, but the potential interaction between them was withheld from training. Unlike control participants trained on a near-identical language without this potential, test participants rated critical items exhibiting the interaction as significantly more well-formed than control items, suggesting that they were able to generalize beyond the observed language properties.


Mapping linguistic input onto real-world knowledge in online language comprehension

Ross Metusalem

+ more

Comprehending language involves mapping linguistic input onto knowledge in long-term memory. This talk will discuss two studies, one complete and one at its outset, investigating this mapping as it occurs during incremental comprehension. Specifically, the studies examine the activation of unstated knowledge regarding described real-world events. The talk will begin by briefly discussing an ERP study finding that the N400 elicited by a locally anomalous word (e.g., They built a jacket in the front yard) is reduced when that word is generally associated with the described event (Metusalem, Kutas, Urbach, Hare, McRae, & Elman, 2012). This is taken to indicate that online comprehension involves activation of unstated knowledge beyond that which would constitute a coherent continuation of the linguistic input. The talk will then turn to an upcoming study that will utilize both Visual World eye-tracking and ERP experiments to probe knowledge activation as a discourse unfolds through time, with the aim of addressing specific issues regarding how linguistic input dynamically modulates knowledge activation during online comprehension.


Much ado about not(hing)

Simone Gieselman

+ more

Negative sentences such as Socrates didn't like Plato are thought to come with a large processing cost in comparison to their corresponding positive counterparts, such as Socrates liked Plato. This is reflected in longer reading and reaction times, higher error rates, larger brain responses and greater cortical activation for negative versus positive sentences. From the perspective of everyday language use, this is surprising, because we use negation frequently, and mostly with apparent ease. Many studies have attempted to shed light on the reason for the processing cost of negation but so far, the "negation puzzle" hasn't been solved.

In this talk, I present a series of reading-time studies showing that if we control the context of positive and negative sentences in a clear and precise way, we can manipulate whether the "negation effect" appears or not. On the basis of these results, I argue that negative sentences generally aren't harder to process than positive sentences. Depending on the context of an utterance, negative sentences may be less informative than positive sentences (the opposite may also be true) and thus require additional inferential processing on the part of the comprehender to understand what the intended world is like. I argue that these additional inferential processes have previously been conflated with an inherent processing cost of negation.


Storage and computation in syntax: Evidence from sentence production priming studies

Melissa Troyer

+ more

In morphology, researchers have provided compelling evidence for the storage of compositional structures that could otherwise be computed by rule. In syntax, evidence of storage of fully compositional structures has been less forthcoming. We approach this question using syntactic priming, a method exploiting the tendency of individuals to repeat recently produced syntactic structures. We investigate relative clauses (RCs), which are syntactically complex but are nevertheless frequent in natural language. Across three experiments, we observe that priming of object-extracted RCs is sensitive to a) the type of noun phrase in the embedded subject position (a full NP vs. a pronoun), and b) the type of relative pronoun (who vs. that). This suggests that the representations of some types of RCs involve storage of large units that include both syntactic and lexical information. We interpret these results as supporting models of syntax that allow for complex mixtures of stored items and computation.


All in my mind: language production, speech errors, and aging

Trevor Harley University of Dundee

+ more

What happens to language skills as we age? In particular, what happens to the skills that enable us to manipulate our own language processes? I present data from several studies on changes in phonological awareness in normal and pathological aging (mainly concerning individuals with Parkinson's disease). I relate the results to models of lexical access in speech production and of the executive control of language. I also discuss the nature of a general phonological deficit and how aging can mimic its effects. Primarily though I ask: what is wrong with my language production?


Impossible to Ignore: Phonological Inconsistency Slows Vocabulary Learning

Sarah Creel

+ more

Though recent work examines how language learners deal with morphosyntactic input inconsistency, few studies explore learning under phonological inconsistency. The predominant picture of phonological acquisition is that young learners encode native-language speech sound distributions, and these distributions--phonemes--then guide lexical acquisition. Yet most children’s phonological experiences, even within a language, contain variability due to regional dialect variation, L2 speakers, and casual speech, potentially generating seemingly-different phonological realizations of the same word. Do learners merge variant word forms, or store each variant separately? To distinguish between these possibilities, children (ages 3-5) and adults learned words with or without phonological inconsistency. Both children and adults showed increased difficulty when learning phonologically inconsistent words, suggesting they do not merge speech-sound category variability. Data are more consistent with learning separate forms, one per accent, though this appears easier than learning two completely-different words. Ongoing work explores real-world accent variation.


Why do your lips move when you think?

Gary Oppenheim

+ more

When you imagine your own speech, do you think in terms of the motor movements that you would use to express your speech aloud (e.g. Watson's 1913 proposal that "thought processes are really motor habits in the larynx"), or might this imagery represent more abstract phonemes or words? Inner speech is central to human experience, often stands in for overt speech in laboratory experiments, and has been implicated in many psychological disorders, but it is not very well understood. In one line of work (Oppenheim & Dell, 2008; 2010; Oppenheim, 2012; in press; Dell & Oppenheim, submitted), I have examined phonological encoding in inner speech, trying to identify the form of the little voice in your head. Here I've developed a protocol to examine levels of representation in inner speech by comparing distributions of self-reported errors in inner speech to those in overt speech, and used both process (neural network) and analytical (multinomial processing tree) models to relate the differences in error patterns to differences in the underlying processes. The work indicates that inner speech represents a relatively abstract phoneme level of speech planning (Oppenheim & Dell, 2008), but is flexible enough to incorporate further articulatory information when that becomes available (Oppenheim & Dell, 2010). For example, silently mouthing a tongue-twister leads one to 'hear' different errors in one's inner speech. Aside from addressing the initial questions about inner speech, this work has constrained theories of self-monitoring in overt speech production (Oppenheim & Dell, 2010; Oppenheim, in press) and provided crucial evidence for the role of abstract segmental representations (Dell & Oppenheim, submitted).

This talk will primarily focus on the empirical work, but I can address additional issues as time and interest allow. For instance, recent challenges to our 2008 claims (e.g. from Corley, Brocklehurst, & Moat, 2011), though overstated, have inspired a more general account of the relationship between error rates and 'good' error effects that is backed by both computational modeling and empirical data (Oppenheim, 2012; Dell & Oppenheim, submitted): because speech errors are over-determined, error effects tend to be stronger (as odds ratios) when production is more accurate, but the resultantly rare errors may provide less statistical power to detect error effects.


The Grammar of Visual Narratives: Structure, Meaning, and Constituency in Comics

Neil Cohn

+ more

Comics are a ubiquitous form of visual narrative in contemporary society. I will argue that, just as syntax allows us to differentiate coherent sentences from scrambled strings of words, the comprehension of sequential images in comics also uses a grammatical system to distinguish coherent narrative sequences from random strings of images. First, I will present a theoretical model of the narrative grammar underlying comics—a hierarchic system of constituent structure that constrains the sequences of images. I then will provide an overview of recent research that supports the psychological validity of this grammar, using methods from psycholinguistics and cognitive neuroscience. In particular, I will emphasize that the neurophysiological responses elicited by violations of syntax and semantics in sentences are also elicited by violations of narrative structure and semantics in the sequential images of comics. Finally, I consider what ramifications a narrative grammar of sequential images has for theories of verbal narrative and language in general.


The impact of language and music experience on talker identification

Micah Bregman

+ more

Speech is typically studied for its role in transmitting meaning through words and syntax, but it also provides rich cues to talker identity. Acoustic correlates of talker identity are intermingled with speech sound information, making talker recognition a potentially difficult perceptual learning problem. We know little about how listeners accomplish talker recognition, though several previous studies suggest a role for language familiarity and phonological processing. In this talk, I will present the results of a recent study with Professor Sarah Creel where we asked whether bilingual and monolingual listeners learned voices more rapidly as a function of language familiarity and age of acquisition. We observed an interaction with language background: Korean-English bilinguals learned to recognize Korean talkers more rapidly than they learned English talkers, while English-only participants learned English talkers faster than they learned Korean talkers. Further, bilinguals' learning speed for talkers in their second language (English) correlated with how early they began learning English. Individuals with extensive musical experience learned to recognize voices in their non-dominant language faster than those with less musical experience. Taken together, these results suggest that individual differences in language experience and differences in auditory experience (or ability) affect talker encoding.


Using hands and eyes to investigate conceptual representations: Effects of spatial grouping and event sequences on language production

Elsi Kaiser Department of Linguistics, University of Southern California

+ more

In this talk, I present some of our recent work investigating how the human mind represents (i) relations between events in different domains (using priming to probe effects of motor actions on discourse-level representations) and (ii) relations between objects in different domains (effects of grouping in the visual domain and in language, on the prosodic level). Segmenting stimuli into events and understanding the relations between those events is crucial for understanding the world. For example, on the linguistic level, successful language use requires the ability to recognize semantic coherence relations between events (e.g. causality, similarity). However, relatively little is known about the mental representation of discourse structure. I will present experiments investigating whether speakers’ choices about event-structure and coherence relations between clauses are influenced by semantic relations represented by preceding motor actions (especially causality), and the event-structure of such motor-action sequences. These studies used a priming paradigm, where participants repeated a motor action modeled by the experimenter (e.g. roll a ball towards mini bowling pins to knock them over), and then completed an unrelated sentence-continuation task. In addition, I will investigate the question of cross-domain representations from another angle: I will present a study that investigates the relation between abstract relations in the domain of prosody (prosodic grouping and prosodic boundaries) and relations in the visual domain (grouping objects). As a whole, our findings provide new information about the domain-specificity vs. domain-generality of different kinds of representations.
In the domain of events, our findings point to the existence of structured representations which encode fine-grained details as well as information about broader connections between classes of coherence relations, and suggest that motor actions can activate richly-encoded representations that overlap with discourse-level aspects of language. In the visual domain, our findings suggest that linguistic and visual representations interface at an abstract level, reflecting cognitive structuring rather than the detailed physical dimensions of either speech or visual information.


Evolving The Direct Path In Praxis As A Bridge To Duality Of Patterning In Language

Michael Arbib USC Neuroscience

+ more

We advance the Mirror System Hypothesis (Arbib, 2012: How the Brain Got Language: The Mirror System Hypothesis. Oxford University Press) by offering a new neurologically grounded theory of duality of patterning in praxis and show how it serves complex imitation and provides an evolutionary basis for duality of patterning in language.


Sequential vs. hierarchical models of human incremental sentence processing

Victoria Fossum

+ more

Experimental evidence demonstrates that syntactic structure predicts observed reading times during human incremental sentence processing, above and beyond what can be accounted for by word-level factors alone. Despite this evidence, open questions remain: which type of syntactic structure best explains observed reading times, hierarchical or sequential, and lexicalized or unlexicalized? One previous study found that lexicalizing syntactic models does not improve prediction accuracy. Another more recent study found that sequential models predict reading times better than hierarchical models, and concluded that the human parser is insensitive to hierarchical syntactic structure. We investigate these claims, and find a picture more complicated than the one presented by previous studies. Our findings show that lexicalization does improve reading time prediction accuracy after all, and that the claim that the human parser is insensitive to hierarchical syntactic structure is premature.
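Studies of this kind typically operationalize each model's predictions as per-word surprisal and regress reading times on those values. A minimal sketch of the sequential side, using an add-one-smoothed bigram model over a hypothetical toy corpus (real studies train on large corpora and compare against hierarchical parser probabilities):

```python
import math
from collections import Counter

def bigram_surprisals(sentence, corpus, vocab_size):
    """Per-word surprisal, -log2 P(w_i | w_{i-1}), under an
    add-one-smoothed bigram (sequential) language model."""
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    return [-math.log2((bigrams[(prev, w)] + 1) /
                       (unigrams[prev] + vocab_size))
            for prev, w in zip(sentence, sentence[1:])]

# Hypothetical toy corpus for illustration only.
corpus = "the dog saw the cat the cat saw the dog".split()
vocab_size = len(set(corpus))
surprisals = bigram_surprisals("the dog saw the cat".split(),
                               corpus, vocab_size)
# These per-word values would then serve as regression predictors
# of observed reading times.
```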


Workshop: Writing papers for publication

Victor Ferreira

+ more

Yes, it's true: you're about to enter the ranks of the elite paper submitter. In this workshop, Vic (with a little help from his friends) will be sharing thoughts about strategies for getting your research published. Don't miss it!


Not Lost in Translation: Learning about words in a sea of sound

Suzanne Curtin Department of Psychology, University of Calgary

+ more

Learning about words is one of the key building blocks of language acquisition. To learn a new word, infants begin by identifying the relevant sound pattern in the speech stream. Next, they encode a sound representation of the word, and then establish a mapping between the word and a referent in the environment. Despite the apparent complexity of this task, infants move from a productive vocabulary of about 6 words at 12 months to a vocabulary of over 300 words by 24 months. In this talk I will discuss some of the ways in which young infants use the phonological information in the speech signal to map words to meaning. Specifically, I will present research exploring how knowledge of the sound system established over the first year of life influences infants’ mapping of words to objects and events.


How the self-domestication hypothesis can help to solve language evolution puzzles

Robert Kluender

+ more

Most proposals for the evolution of language center around the so-called discontinuity paradox: while human language has to have come from somewhere evolutionarily (i.e. ought to be reconstructably continuous with the communicative behavior of other species), it nonetheless appears to exhibit sharp qualitative differences (i.e. discontinuities) from other known systems of animal communication. Typically, those uncomfortable with the notion of human language as an evolutionary accident or "spandrel" have been forced to adopt a gradualist, continuous view of language evolution, a rather difficult position to defend given the mass extinction of -- and consequent absence/paucity of relevant evidence from -- all other known hominin species.

Recently, much attention has been paid to the surprising yet consistent morphological and behavioral discontinuities that emerge in various unrelated species under human intervention via domestication, and by hypothesis in certain wild species under proposed processes of "self-domestication". In this talk I review these separate proposals and juxtapose them in a way that reveals a number of appealing solutions to long-standing, thorny conceptual problems in the evolution of language. Aside from obvious implications for human socialization and enculturation, I argue that self-domestication in the hominin lineage could help to account for not only the otherwise mysterious descent of the larynx, but also precisely for those puzzling facts that modern, discontinuous views of language arose to address in the first place: namely, the "overcomplexity" of human language (Saussure) and the ease with which it is acquired at remarkably early stages of human development, when cognitive ability is otherwise severely limited (so-called critical period effects).


ERPs for Gender Processing in French adults and children: task and age effects

Phaedra Royle École d'orthophonie et d'audiologie, Université de Montréal

+ more

This talk presents the first study of auditory gender processing in French using ERPs. In order to study the development of gender agreement processing in French children, we developed an ERP study using an auditory-visual sentence-picture matching paradigm for French noun-phrase (DP) internal agreement. This is an aspect of French that can be difficult to master, due to the idiosyncrasy of gender marking, which has also proven difficult for children with language impairment. We used the ERP paradigm in order to tap into ongoing language processing while obviating the use of grammaticality judgment, which can be difficult to use with young children. A first study established the paradigm with adult data, while controlling for task effects. A second study piloted the paradigm with children.


Cross-Modal Mechanisms in Situated Language Processing

Moreno Coco School of Informatics, University of Edinburgh

+ more

Most everyday tasks require different cognitive processes to exchange, share, and synchronize multi-modal information. In my eye-tracking research, I focus on the mechanisms underlying the synchronous processing of visual and linguistic information during language production tasks, such as object naming or image description in photo-realistic scenes.

In this talk, I first discuss the interplay between low level (e.g., visual saliency) and high level information (e.g., contextual congruency) during object naming. Then, I move to the more complex linguistic task of scene description. In contrast to the previous literature, my results show the co-existence of three components of visual guidance (perceptual, conceptual, and structural) which interact with sentence processing. Based on this finding, I outline a novel approach to quantifying the cross-modal similarity of visual and linguistic processing. In particular, I demonstrate that the similarity between visual scan patterns correlates with the similarity between sentences, and that this correlation can be exploited to predict sentence productions based on associated scan patterns.


From Shared Attention to Shared Language: Results From a Longitudinal Investigation of Early Communication

Gedeon Deák

+ more

The literature on child language offers a bewildering array of data on the emergence of early language. There is evidence that prelinguistic social development, infants' own information-processing capacities, and richness of the language environment jointly explain the wide range of language skills seen in 1- and 2-year-old toddlers. There are, however, few studies that investigate all three factors (prelinguistic social skills, cognitive capacities, and language input) in tandem. I will describe preliminary findings from a study that does just that: a longitudinal sample of infants followed from 3 to 22 months. I will focus on individual differences in infants' attention-sharing skills in controlled tasks (mostly from 9 to 12 months), on maternal naturalistic speech variability, including amount of talk, diversity of vocabulary, use of "mental verbs," and discourse markers of 2nd-person address (i.e., infant's name and "you"). I will describe relations among those variables, and indicate which ones uniquely predict language skills at 12 and 18 months.


Can native-language perceptual bias facilitate learning words in a new language?

Bożena Pająk
(work in collaboration with Sarah Creel & Roger Levy)

+ more

Acquiring a language relies on distinguishing the sounds and learning mappings between meaning and phonetic forms. Yet, as shown in previous research on child language acquisition, the ability to discriminate between similar sounds does not guarantee success at learning words contrasted by those sounds. We investigated whether adults, in contrast to young infants, are able to attend to phonetic detail when learning similar words in a new language. We tested speakers of Korean and Mandarin to see whether they could use their native-language-specific perceptual biases in a word-learning task. Results revealed that participants were not able to fully capitalize on their perceptual abilities: only better learners -- as independently assessed by baseline trials -- showed enhanced learning involving contrasts along phonetic dimensions used in their native languages. This suggests that attention to phonetic detail when learning words might only be possible for adults with better overall perceptual abilities, better learning skills, or higher motivation.


Cumulative semantic interference persists even in highly constraining sentences

Dan Kleinman

+ more

When speakers engage in conversation, they often talk about multiple members of the same semantic category. Given this, it seems inefficient that subjects name pictures (e.g., cow) more slowly when they have previously named other (and more) members of the same semantic category (horse, pig; Howard et al., 2006). Of course, in normal speech, words are typically produced in rich semantic contexts. In my talk, I will present the results of two experiments that investigate whether this cumulative semantic interference effect (CSIE) persists even when pictures are presented in such a context, i.e., after high-cloze sentences.

In each of two experiments, 80 subjects named 60 critical pictures, comprising 12 semantic categories of five pictures each, in two blocks. In both blocks of Experiment 1 and the first block of Experiment 2, half of the pictures in each block were presented in isolation; the other half were preceded by high-cloze sentences presented via RSVP with the last word omitted (e.g., "On the class field trip, the students got to milk a ___"). In the second block of Experiment 2, every picture was presented in isolation.

Results from both experiments showed that although pictures were named nearly 200 ms faster in the sentence condition relative to the bare condition, CSIEs of equivalent size were observed within both conditions. Furthermore, Experiment 1 showed that this interference fully transferred between conditions: Naming cow slowed the subsequent naming of horse equally regardless of whether cow or horse was named in isolation or after a sentence. However, Experiment 2 showed that despite equivalent interference effects, pictures that were named after sentences in the first block (compared with pictures that were named in the bare condition in the first block) exhibited less repetition priming in the second block.

Three conclusions can be drawn from these results. First, they demonstrate that cumulative semantic interference persists -- undiminished in size -- even when pictures are named in richer semantic contexts, suggesting that CSI might affect more naturalistic speech. Second, they run counter to the predictions of Howard et al. (2006), whose model of CSI involves competitive lexical selection and incorrectly predicts that trials with faster naming latencies will show less interference; but comport with the error-based learning account of CSI advanced by Oppenheim et al. (2010). Third, the results potentially shed light on the nature of cloze, since Oppenheim et al. (2010) can explain the pattern of decreased repetition priming and unchanged CSIE in the sentence condition if the sentences used in Experiments 1 and 2 increased target activation while leaving competitor activation unchanged.
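The error-based learning account can be caricatured in a few lines: naming a picture strengthens the semantics-to-word link for the target while weakening links to co-activated same-category competitors, so later naming of a category neighbor is slower. This is a toy sketch, not the published Oppenheim et al. (2010) model; the link-to-latency mapping below is an assumption made purely for illustration:

```python
def name_picture(weights, target, competitors, lr=0.1):
    """One error-driven update: strengthen the semantics-to-target link,
    weaken links to co-activated same-category competitors."""
    latency = 1.0 / weights[target]  # assumed: weaker link, slower naming
    weights[target] += lr * (1.0 - weights[target])
    for c in competitors:
        weights[c] -= lr * weights[c]
    return latency

weights = {"cow": 0.5, "horse": 0.5, "pig": 0.5}
t1 = name_picture(weights, "cow", ["horse", "pig"])
t2 = name_picture(weights, "horse", ["cow", "pig"])  # slower than t1:
# naming "cow" already weakened the link to "horse"
```

Because the interference arises from learning on the links themselves rather than from competition at selection time, the update (and hence the slowdown) is predicted to occur whether or not a constraining sentence speeds the naming response, consistent with the results above.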


The Object of Whose Hands? Empathy and Movement in the Work of Literary Studies

Stephanie Jed Department of Literature

+ more

I investigate terms/concepts such as grasping, attention, representation, space, event, and intersubjectivity as they are embodied in literary studies and cognitive science research. My intent is to theorize, in concrete ways, how our hands form part of the interpretive field, and to explore the viability of cross-disciplinary research between literature/history and cognitive science.


Niche Construction and Language Evolution

Hajime Yamauchi UC Berkeley

+ more

Like other new scientific enterprises, studies within evolutionary linguistics vary widely. While some argue that language owes its phylogenetic explanation to simple brain evolution (i.e., biological evolution), others promote the view that language is a complex meme replicated through acquisition, and hence has evolved to be a better replicator for the brain (cultural evolution). These divisions reflect the notorious polarization of the nature-nurture problem. Unlike in traditional linguistics, however, it is the intersection of the two camps, known as brain-language coevolution, where the most exciting findings are expected. Unfortunately, despite this promising perspective, studies in this domain have been conspicuously lacking.

In this presentation, I will discuss language acquisition as a key aspect of this coevolutionary process: it is a "differential gear" connecting the two wheels revolving on different timescales (i.e., biological and cultural evolution). With a computer simulation, I will demonstrate that language entails a modification not only of the selective environment, but also of the learning environment; the learning environment in one generation is dynamically created by the previous generations' linguistic activities (and itself forms a selective environment). If such modifications of the learning environment affect the learnability of a given language, and hence the cost of learning, they will induce an evolutionary process in language acquisition.


How our hands help us think

Susan Goldin-Meadow

+ more

When people talk, they gesture. We now know that these gestures are associated with learning. They can index moments of cognitive instability and reflect thoughts not yet found in speech. What I hope to do in this talk is raise the possibility that gesture might do more than just reflect learning -- it might be involved in the learning process itself. I consider two non-mutually exclusive possibilities: the gestures that we see others produce might be able to change our thoughts; and the gestures that we ourselves produce might be able to change our thoughts. Finally, I explore the mechanisms responsible for gesture's effect on learning -- how gesture works to change our minds.


Language, Sensori-Motor Interfaces, and Time: Temporal Integration Windows in the Perception of Signed and Spoken Languages

So-One Hwang

+ more

Linguistic structures are processed in time, whether listening to acoustic speech or viewing the visual input of sign language. In this talk, I will discuss the perceiver's sensitivity to the rate at which linguistic form and meaning unfold for integrating the sensory input in time chunks. The duration or size of time windows for integrating the input is tested by measuring the intelligibility of locally-reversed sentences in American Sign Language and making comparisons with findings from speech. In a series of three perceptual experiments, the results demonstrate 1) the impact of modality (auditory versus visual processing) on the duration of temporal integration windows, where visually based ASL is dramatically more resistant to this temporal distortion than spoken English and involves longer time-windows for integration, 2) modality-independent properties of temporal integration, where duration is directly linked with the rate of linguistic information in both signed and spoken languages, and 3) the impact of age of language acquisition on temporal processing. These findings have implications for the neurocognitive underpinnings of integration in perception, for rates in production, and for the role of early input in the development of these aspects of language processing.


What You Expect When You're Expecting: Listener Modeling of Speakers in Language Comprehension

Rachel Ostrand

Recruiting auditory space to reason about time

Esther Walker

Testing phonological organization in bilinguals: An event-related brain potential study

Carson Dance


Neural Correlates of Auditory Word Processing in Infants and Adults

Katie Travis

+ more

Although infants and adults both learn and experience words frequently in the auditory modality, much more is known about the neural dynamics underlying visual word processing. Even more limited is knowledge of the brain areas supporting developing language abilities in infants. In this talk, I will describe findings from three related studies that help to advance current understanding of neurophysiological processing stages and neural structures involved in auditory word processing in both the developing and mature brain. Briefly, the first study I will present reveals new evidence from adults for an early neural response that is spatially and temporally distinct from later, well-established neural activity thought to index the encoding of lexico-semantic information (N400). The second study I will describe finds evidence to suggest that infants and adults share similar neurophysiological processes and neuroanatomical substrates for spoken word comprehension. Finally, I will discuss results from a third study in which we find evidence for neuroanatomical structural changes within cortical areas thought to be important for word understanding in 12- to 19-month-old infants.


Cultural emergence of combinatorial structure through iterated learning of whistled languages

Tessa Verhoef University of Amsterdam

+ more

In human speech, a finite set of basic sounds is combined into a (potentially) unlimited set of well-formed morphemes. Hockett (1960) termed this phenomenon 'duality of patterning' and included it as one of the basic design features of human language. Of the 13 basic design features Hockett proposed, duality of patterning is the least studied and it is still unclear how it evolved in language. Hockett suggested that a growth in meaning space drove the emergence of combinatorial structure: If there is a limit on how accurately signals can be produced and perceived, there is also a limit to the number of distinct signals that can be discriminated. When a larger number of meanings need to be expressed, structured recombination of elements is needed to maintain clear communication. However, it has been demonstrated that a fully functional and expressive new sign language can have only gradually emerging combinatorial structure (Sandler et al., 2011). This case calls into question whether the emergence of combinatorial structure is necessarily driven by a growing meaning space alone. Furthermore, experimental work on the emergence of combinatorial structure in written symbols (del Giudice et al., 2010), as well as work I will present in this talk, shows that this structure can emerge through cultural transmission, even in the case of a small vocabulary. It seems therefore to be an adaptation to human cognitive biases rather than a response to a growth in vocabulary size. In these experiments we use the method of experimental iterated learning (Kirby et al., 2008), which allows cultural transmission to be investigated in the laboratory. This method simulates iterated learning and reproduction, in which the language a participant is trained on is the recalled output that the previous participant produced. The experiment I will present investigates the emergence of combinatorial structure in an artificial whistled language.
Participants learn and recall a system of sounds produced with a slide whistle, an instrument that is both intuitive and non-linguistic, so that interference from existing experience with speech is blocked. I will show from a series of experiments that transmission from participant to participant causes the system to change and become cumulatively more learnable and more structured. Interestingly, the basic elements that are recombined consist of articulatory movements rather than acoustic features.
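The transmission-chain logic of experimental iterated learning can be sketched in a few lines of simulation. This is a deliberately minimal illustration, not a model of the whistle experiment itself: representing signals as short vectors of pitch targets, and modeling limited memory as snapping to a coarse mental grid (`RESOLUTION`) with production noise (`NOISE`), are assumptions invented for the sketch.

```python
import random

def recall(signal, resolution):
    # Limited memory: the learner snaps each pitch target to the nearest
    # point on a coarse mental grid (a small inventory of basic elements).
    return tuple(round(v * resolution) / resolution for v in signal)

def mean_error(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

rng = random.Random(1)
RESOLUTION, NOISE = 5, 0.02

# Generation 0: twelve holistic signals, four random pitch targets each.
signals = [tuple(rng.random() for _ in range(4)) for _ in range(12)]

recall_errors = []  # how hard each generation's input is to remember
for generation in range(10):
    recalled = [recall(s, RESOLUTION) for s in signals]
    recall_errors.append(
        sum(mean_error(s, r) for s, r in zip(signals, recalled)) / len(signals))
    # Production adds motor noise; this output is the next learner's input.
    signals = [tuple(min(1.0, max(0.0, v + rng.gauss(0.0, NOISE))) for v in r)
               for r in recalled]

print(round(recall_errors[0], 3), round(recall_errors[-1], 3))
```

Because each generation's output passes through the same memory bottleneck, the signals drift toward reuse of a few shared elements, and later generations reproduce their input more faithfully than generation 0 could: learnability accumulates across the chain, as in the experiments.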


del Giudice, A., Kirby, S., & Padden, C. (2010). Recreating duality of patterning in the laboratory: a new experimental paradigm for studying emergence of sublexical structure. In A. D. M. Smith, M. Schouwstra, B. de Boer, & K. Smith (Eds.), Evolang8 (pp. 399-400). World Scientific Press.

Hockett, C. (1960). The origin of speech. Scientific American, 203, 88-96.

Kirby, S., Cornish, H., & Smith, K. (2008). Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language. Proceedings of the National Academy of Sciences, 105(31), 10681-10686.

Sandler, W., Aronoff, M., Meir, I., & Padden, C. (2011). The gradual emergence of phonological form in a new language. Natural Language & Linguistic Theory, 29(2), 503-543.


The relationship between referential and syntactic dependency processing

Christopher Barkley

+ more

Natural language is full of long-distance relationships in which two non-local elements depend on each other for full interpretation. In the event-related brain potential (ERP) literature, so-called filler-gap dependencies ("Which movie would you like to see __?") have received the bulk of the attention, and a relatively consistent picture has emerged in terms of the ERP responses associated with the processing of these types of syntactic relationships. Here we investigate another type of long-distance dependency, namely the co-referential relationship between a pronoun and its antecedent ("John thought that he should probably see the movie too").

Previous studies of referential processing have relied on violating various morpho-syntactic features of the pronoun-antecedent relationship such as agreement or binding (Osterhout & Mobley, 1995), or on manipulating ambiguity of reference (Van Berkum et al., 1999, 2003, 2007), but none have investigated the brain’s response to simple and unambiguous relationships between pronouns and their antecedents as we do here. We hypothesized that, while syntactic and referential dependencies have been analyzed very differently and kept maximally separate in the theoretical linguistics literature, they pose the same basic processing challenges for the brain, and therefore that similar brain responses should be observed in response to the second element in each dependency type. Our results revealed an interesting pattern of similarities and differences across dependency types, and will be discussed in terms of the relationship between syntactic and referential dependency formation and with regard to the functional identity of the left anterior negativity (LAN). They will also be placed in the context of the extant ERP literature on referential processing, and discussed in terms of the potentially fruitful bi-directional relationship between processing data and linguistic theory construction.


Acquiring a first language in adolescence: Behavioral and neuroimaging studies in American Sign Language

Naja Ferjan Ramirez

+ more

What is the process of language acquisition like when it begins for the first time in adolescence? Is the neural representation of language different when acquisition first begins at an older age? These questions are difficult to answer because language acquisition in virtually all hearing children begins at birth. However, among the deaf population are individuals who have been cut off from nearly all language until adolescence; they cannot hear spoken language and, due to anomalies in their upbringing, they have not been exposed to any kind of sign language until adolescence. I will first discuss the initial language development of three deaf adolescents who began to acquire American Sign Language (ASL) as their first language (L1) at age 14 years. Using the ASL-CDI and detailed analyses of spontaneous language production we found that adolescent L1 learners exhibit highly consistent patterns of lexical acquisition, which are remarkably similar to child L1 learners. The results of these behavioral studies were then used to create the stimuli for a neuroimaging experiment of these case studies. Using anatomically constrained magnetoencephalography (aMEG), we first gathered pilot data by investigating the neural correlates of lexico-semantic processing in deaf native signers. Results show that ASL signs evoke a characteristic event-related response peaking at ~400 ms post-stimulus onset that localizes to a left-lateralized fronto-temporal network. These data agree with previous studies showing that, when acquired from birth, the localization patterns of ASL processing are similar to those of spoken language. Using the same experimental protocol we then neuroimaged two cases who had no childhood language and found that their brain responses to ASL signs look remarkably different from those of native signers, indicating that delays in language acquisition severely affect the neural patterns associated with lexico-semantic encoding.
Our results suggest that language input in early childhood, spoken or signed, is critical for establishing the canonical left-hemisphere semantic network.


Why would musical training benefit the neural encoding of speech? A new hypothesis

Aniruddh Patel

+ more

Mounting evidence suggests that musical training benefits the neural encoding of speech. This paper offers a hypothesis specifying why such benefits occur. The "OPERA" hypothesis proposes that such benefits are driven by adaptive plasticity in speech-processing networks, and that this plasticity occurs when five conditions are met. These are: (1) Overlap: there is anatomical overlap in the brain networks that process an acoustic feature used in both music and speech (e.g., waveform periodicity, amplitude envelope), (2) Precision: music places higher demands on these shared networks than does speech, in terms of the precision of processing, (3) Emotion: the musical activities that engage this network elicit strong positive emotion, (4) Repetition: the musical activities that engage this network are frequently repeated, and (5) Attention: the musical activities that engage this network are associated with focused attention. According to the OPERA hypothesis, when these conditions are met neural plasticity drives the networks in question to function with higher precision than needed for ordinary speech communication. Yet since speech shares these networks with music, speech processing benefits. The OPERA hypothesis is used to account for the observed superior subcortical encoding of speech in musically trained individuals, and to suggest mechanisms by which musical training might improve linguistic reading abilities.


Rhythm classes in speech perception

Amalia Arvaniti

+ more

A popular view of rhythm divides languages into three rhythm classes, stress-, syllable- and mora-timing. Although this division has not been supported by empirical evidence from speech production (e.g. Arvaniti, 2009; Arvaniti, to appear), it has been generally adopted in the fields of language acquisition and processing based on perception experiments that appear to support the notion of rhythm classes. However, many of the perceptual experiments are amenable to alternative interpretations. Here this possibility is explored by means of a series of perception experiments. In the first two experiments, listeners were asked to indirectly classify impoverished stimuli from English, German, Greek, Korean, Italian and Spanish by rating their similarity to non-speech trochees (the closest non-speech analog to stress-timing). No evidence was found that listeners rated the languages across rhythm class lines; results differed depending on the type of manipulation used to disguise language identity (in experiment 1, low-pass filtering; in experiment 2, flat sasasa in which consonantal intervals are turned into [s], vocalic ones into [a] and F0 is flattened). In a second series of five AAX experiments English was compared to Polish, Spanish, Danish, Korean and Greek in a 2×2 design: the (sasasa) stimuli either retained the tempo (speaking rate in syllables per second) of the original utterances or had all the same tempo (average of the two languages in each experiment); F0 was either that of the original utterances or flattened. Discrimination was based largely on tempo, not rhythm class, while the role of F0 depended on tempo: when tempo differences were large, F0 hindered discrimination but when they were small it enhanced discrimination for the pairs of languages that differ substantially in F0 patterns (especially English vs. Korean). The results overall do not support the idea that rhythm classes have a basis in perception.
They further show that the popular sasasa signal manipulation is not ecologically valid: results differed depending on whether additional prosodic information provided by F0 was present or not, suggesting that the timing information encoded in sasasa is not processed independently of the other components of prosody. Finally, the results of the second series of experiments strongly suggest that results interpreted as evidence for rhythm classes are most likely due to a confound between tempo and rhythm class.


Rational imitation and categorization in a-adjective production

Jeremy Boyd

+ more

How do language learners acquire idiosyncratic constraints on the use of grammatical patterns? For example, how might one determine that members of the class of English a-adjectives cannot be used prenominally (e.g., ??The asleep/afloat/alive duck…, cf. The duck that's asleep/afloat/alive…)? In this talk I present evidence indicating (1) that learners infer constraints on the use of a-adjectives by evaluating distributional patterns in their input, (2) that the constraint against prenominal a-adjective usage is abstract and generalizes across members of the a-adjective class, and (3) that learners shrewdly evaluate the quality of their input, and in fact disregard uninformative input exemplars when deciding whether a grammatical constraint should be inferred. Moreover, the existence of similar types of reasoning in non-linguistic species suggests the presence of phylogenetically conserved mechanisms that, while not specific to language, can be used to arrive at conclusions about what forms are and are not preferred in grammar.


The Gradient Production of Spanish-English Code-Switching

Page Piccinini

+ more

It is generally assumed that in code-switching (CS) switches between two languages are categorical; however, recent research suggests that the phonologies involved in CS are merged and that bilinguals must actively suppress one language when encoding in the other. Thus, it was hypothesized that CS does not take place abruptly but that cues before the point of language change are also present. This hypothesis is tested with a corpus of Spanish-English CS examining word-initial voiceless stop VOT and the vowel in the discourse marker "like." Both English and Spanish VOTs at CS boundaries were shorter, or more "Spanish-like," than in comparable monolingual utterances. The vowel of "like" in English utterances was more monophthongal and had a lower final F2 as compared to "like" in Spanish utterances. At CS boundaries, "like" began similarly to the language preceding the token and ended similarly to the language following it. For example, in an "English-like-Spanish" utterance, initial formant measurements were more English-like but final measurements were more Spanish-like. These results suggest that code-switching boundaries are not categorical, but rather an area where the phonologies of both languages affect productions.


Do you see what I mean? Cognitive resources in speech-gesture integration

Seana Coulson

+ more

Often when people talk, they move their bodies, using their hands to indicate information about the shape, size, and spatial configuration of the objects and actions they're talking about. In this talk, I'll discuss a series of experiments in my lab that examined how gestural information affects discourse comprehension. We find that individuals differ greatly in their sensitivity to co-speech gestures, and suggest visuo-spatial working memory (WM) capacity as a major source of this variation. Sensitivity to speech-gesture congruity correlates positively with visuo-spatial WM capacity, and is greatest in individuals with high scores on tests of visuo-spatial WM, but low scores on tests of verbal ability. These data suggest an important role for visuo-spatial WM in speech-gesture integration as listeners use the information in gestures to help construct more visually specific situation models, i.e. cognitive models of the topic of discourse.


Incremental lexical learning in speech production: a computational model and empirical evaluation

Gary Oppenheim

+ more

Naming a picture of a dog primes the subsequent naming of a picture of a dog (repetition priming) and interferes with the subsequent naming of a picture of a cat (semantic interference). Behavioral studies suggest that these effects derive from persistent changes in the way that words are activated and selected for production, and some have claimed that the findings require particular mechanisms for lexical selection. Here I will present and evaluate a simple model of lexical retrieval in speech production that applies error-driven learning to its lexical activation network. This model naturally produces repetition priming and semantic interference effects. It predicts the major findings from several published experiments, and model analyses suggest that its effects arise from competition during the learning process, requiring few constraints on the means of lexical selection. New empirical work confirms a core assumption of the learning model by demonstrating that semantic interference persists indefinitely -- remaining detectable at least one hundred times longer than reported in any previous publication -- with no indication of time-based decay.
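The core of an error-driven account like this can be illustrated with a toy delta-rule network. This is a hedged sketch in the spirit of the abstract, not Oppenheim et al.'s actual implementation: the feature sets, the uniform initial weight of 0.5, and the learning `rate` are all invented for illustration.

```python
# Toy lexicon: semantic features feed activation to word nodes.
FEATURES = {
    "dog": {"animal", "pet", "barks"},
    "cat": {"animal", "pet", "meows"},
    "car": {"vehicle", "wheels"},
}
VOCAB = list(FEATURES)

# Feature-to-word connection weights, initially uniform.
weights = {(f, w): 0.5 for w in VOCAB for f in FEATURES[w]}

def activation(word):
    # Summed input that a picture of `word` sends to its own word node.
    return sum(weights[(f, word)] for f in FEATURES[word])

def name_picture(word, rate=0.5):
    # Delta-rule learning on the active features: push the named word's
    # weights toward 1 and every coactivated competitor's weights toward 0.
    for f in FEATURES[word]:
        for other in VOCAB:
            if f in FEATURES[other]:
                goal = 1.0 if other == word else 0.0
                weights[(f, other)] += rate * (goal - weights[(f, other)])

dog_before, cat_before = activation("dog"), activation("cat")
name_picture("dog")
dog_after, cat_after = activation("dog"), activation("cat")
# Repetition priming: dog's own connections were strengthened.
# Semantic interference: cat lost strength on the shared features
# ("animal", "pet"), while unrelated "car" is untouched.
```

Note that both effects fall out of the same weight update, with no decay term: the interference persists until some later naming event happens to reverse it, consistent with the long-lasting interference reported in the abstract.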


L2 phonological learning as a process of inductive inference

Bozena Pajak

+ more

Traditional approaches to second language (L2) phonological learning assume that learners map L2 inputs onto existing category inventories available in the native language (L1). We propose a very different model in which the acquisition of novel phonological category inventories proceeds through general categorization processes, in which knowledge of L1 and other languages provides inductive biases. This approach views linguistic knowledge as hierarchically organized such that the outcome of acquisition of a language includes not only knowledge of the specific language in question, but also beliefs about how languages in general are likely to be structured. In this talk we present results of two experiments that test the predictions of the model regarding how two sources of information—distributional information from a novel L2 and inferences derived from existing language knowledge—combine to drive learning of L2 sound categories.


What's in a Rise? Effects of Language Experience on Interpretation of Lexical Tone

Carolyn Quam

+ more

Models of sound-categorization and word learning must address how second-language learners apply existing sound categories to learn a new language and how/whether established bilinguals differentially attend to acoustic dimensions when processing each language. Here we consider interpretation of pitch, which in English conveys sentence-level intonational categories (e.g., yes/no questions) but in Mandarin contrasts words. We addressed three questions: How accurately is tone information exploited in on-line word recognition? Does this differ for familiar versus newly learned words? Does this differ depending on language experience? In our eye-tracking paradigm, Mandarin-English bilinguals and English monolinguals learned and were tested on novel words. Bilinguals also completed a familiar-word recognition task and two language-dominance/proficiency measures. For bilinguals recognizing familiar Mandarin words, eye-movements revealed that words differing minimally in their segments were recognized faster than words differing in their tones (t(46)=5.2, p<.001). However, this segments>tones difference weakened as Mandarin proficiency increased (r=-0.41, p<.005). Lower-proficiency bilinguals might have exploited tone less because of less experience with the words, so we asked whether newly learned words would also show effects of Mandarin proficiency/knowledge. Clicking responses revealed that monolinguals (t(10)=4.23, p<.005) and bilinguals (t(47)=8.15, p<.001) were less accurate with different-tone than with different-vowel words, regardless of bilinguals’ Mandarin proficiency. Our experiments suggest more difficulty exploiting tone than segments unless word familiarity and Mandarin proficiency are high. This provides a more nuanced view than previous studies (Malins & Joanisse, 2009; Wong & Perrachione, 2007) of the impact of language background on tone interpretation in word-learning and retrieval.


Retrieving words requires attention

Dan Kleinman

+ more

Even though speaking usually feels like an automatic process, it isn't -- at least, not entirely -- as we know from studies showing that talking on a cell phone impairs driving performance. Which stages of language production require attentional resources, and which are automatic? In my talk, I will focus on this question with respect to lemma selection, the stage at which the word to be produced is selected from a speaker's lexicon.
Prior research has investigated this topic using dual-task experiments. Dell'Acqua et al. (2007) presented subjects on each trial with a tone and then, after some delay, a picture with a visually superimposed word (the picture-word interference task). Subjects categorized the pitch of the tone and then named the picture while ignoring the word, which was either semantically related or unrelated to the picture name. They found semantic interference at delays of 350 and 1000 ms but not 100 ms. In keeping with the logic of dual-task experiments, they concluded that lemma selection could co-occur with attention-demanding tone processing, suggesting that it could be performed automatically. This finding is surprising for two reasons: First, it localizes lemma selection to a stage of processing that typically consists of low-level perceptual processing. Second, prior research has shown that attention is required to resolve competition in the Stroop effect, to which picture-word interference is often compared.


Neither hear nor there: a 'metrical restoration' effect in music

Sarah Creel

+ more

What happens in your mind when you hear music? That is, what memories become activated--Where you were when you first heard the song? The time you played it in middle school band? Other similar pieces of music? Recent work in my lab suggests that, much as with linguistic material, listeners hearing music activate detailed memory representations of previous hearings. In this talk I will outline a series of music perception experiments that ask what information gets activated when you hear melodies of varying familiarity. Specifically, I manipulate each listener's musical experience--for instance, listener 1 might hear a melody in Context A, and listener 2 might hear the same melody in Context B--where each context consists of instruments that play at the same time as the melody. I then present both listeners with the melody out-of-context, and probe for effects of the experienced context.
In an initial study (Creel, in press, JEPHPP), I found that listeners retained melody-specific memory for meter. That is, depending on which context they had heard initially, they thought the "beats" fell in different places in a particular melody. An even more interesting question is how memory representations of specific musical experiences might influence processing of other music you hear, such as a Beatles song or Vivaldi concerto you haven’t heard before. Ongoing work is exploring the circumstances under which melody-specific memory influences the processing of new melodies. These results not only imply that listeners activate musical information in a style-specific manner, but also suggest a mechanism by which musical styles might be learned. This approach is somewhat at odds with explanations of music perception that focus on surface cues alone, in that it suggests a strong, specific role for memory. I will also discuss the current work's implications for processing of metrical information in language.


Tomorrow, uphill: Topography-based construals of time in an indigenous group of Papua New Guinea

Rafael Nunez & Kensy Cooperrider

+ more

Do humans everywhere share the same basic abstract concepts? Time, an everyday yet fundamentally abstract domain, is conceptualized in terms of space throughout the world’s cultures. Specifically, linguists and psychologists have presented evidence of a widespread pattern in which deictic time—past, present, and future—is construed according to a linear front/back axis. To investigate the universality of this pattern, we studied the construal of deictic time among the Yupno, an indigenous group from the mountains of Papua New Guinea, whose language makes extensive use of allocentric topographic (uphill/downhill)—but not egocentric (front/back)—terms for describing spatial relations. The pointing direction of their spontaneous co-speech temporal gestures—analyzed via spherical statistics and topographic information—provides evidence of a strikingly different pattern in their time concepts. Results show that the Yupno construe deictic time spatially in terms of allocentric topography: the past is construed as downhill, the present as co-located with the speaker, and the future as uphill. The Yupno construal reflects particulars of the local terrain, and, in contrast to all previous reports, is not organized in terms of opposite directions along a “time-line”. The findings have implications for our understanding of fundamental human abstract concepts, including the extent to which they vary and how they are shaped by language, culture, and environment.


Uncertainty about Previous Words and the Role of Re-reading in Sentence Comprehension

Emily Morgan

+ more

Models of sentence comprehension and of eye-movements in reading have generally focused on the incremental processing of sentences as new words become available, but have paid less attention to the possibility of rereading a previous word. There is recent evidence, however, that downstream information can cause a comprehender to question their belief about a previous word. In this case, a reasonable strategy might be to gather more visual input about the previous word in light of this new information. I will present work in progress on a series of eye-tracking experiments investigating uncertainty in mental representations of visual input and the role of re-reading in sentence comprehension.


ERP Investigations of Causal Inference Processing

Tristan Davenport

+ more

In this talk I report the results of two experiments investigating the effects of causal inference on word processing. EEG was recorded as subjects listened to short stories containing causal coherence gaps, each one followed by a visual probe word selected to index causal inferential or lexical associative processing. In experiment 1, we compare the influences of these two types of context on word processing and find that facilitation effects due to causal inference begin earlier and last longer than those attributed to lexical association. In experiment 2, the first of several planned variations using these materials, the probe words were presented to one visual hemifield at a time to assess hemispheric asymmetries in using lexical and inferential context. Results tentatively suggest a right-hemisphere basis for causal inference effects. Taken together, these results are consistent with models of top-down language processing, with different contextual variables weighted by how well they predict the current word. The results of experiment 2 additionally suggest a neural dissociation between these two aspects of language processing.


Language, Structure, & Thought

David Barner

+ more

I will describe three approaches to studying the relationship between linguistic structure and thought: object perception, counting, and mental math. Together these studies argue that although language provides important structure for guiding inference when learning words and concepts, we do not use it to create qualitatively novel representations. Words act as windows to thought, selecting from among pre-existing representations, or recycling them for new purposes.


Neurocognitive Indices for Event Comprehension

Hiromu Sakai Hiroshima University

+ more

Recognition of event type plays a highly significant part in sentence comprehension. In head-final languages, the predicates that play important roles in determining event type are processed at relatively late stages in the course of constructing the semantic representation of sentences. This leads to interesting questions about when and how event comprehension is achieved in such languages. I conducted a series of behavioral and electro-physiological (event-related potential) experiments that address these issues. The results showed that aspectual mismatch of elements increased reading times even before the parser encounters the predicates, and that aspectual coercion elicited a left-frontal negativity associated with increased processing load. These findings suggest that event comprehension is carried out in an incremental fashion in the course of constructing the semantic representation of sentences even in head-final languages.


Tutorial on hierarchical/mixed-effects models for data analysis

Roger Levy et al

+ more

Hierarchical (also called "multi-level", or sometimes "mixed-effects") probabilistic models are becoming quite popular in all sorts of quantitative work on language, and with good reason: they can capture cross-cutting sources of variability at multiple levels of granularity, allowing researchers great flexibility in drawing generalizations from data. In this tutorial I give a brief introduction to the use of hierarchical models in linguistic data analysis. First I briefly review generalized linear models. I then go on to give a precise description of hierarchical generalized linear models, and cover both (approximate) maximum-likelihood and Bayesian methods for drawing inferences for such models. I continue with coverage of the crucial issue of how to interpret model parameter estimates and conduct hypothesis tests. Finally, I briefly discuss some ongoing work (joint with Hal Tily) on systematic comparisons of different ways of using these models for data analysis, how they compare with traditional ANOVA analyses, and (hopefully) progress towards reliable standards for the use of hierarchical models in linguistic data analysis that reap their benefits while avoiding potential pitfalls.

The tutorial will mix conceptual and mathematical treatment with concrete examples using both simulated and real datasets. R code for much of the material covered in the tutorial will be made publicly available, as well.
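To make the grouping structure concrete, here is a minimal sketch (in Python with simulated data; the tutorial's own examples are in R) of the kind of grouped dataset these models are for, analyzed with a simple two-stage summary-statistics approach that a full hierarchical model generalizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate hierarchical data: 20 subjects, 50 trials each. Every
# subject shares a fixed-effect slope but has their own random
# intercept -- the simplest kind of cross-cutting variability.
n_subj, n_trials = 20, 50
beta = 2.0                                   # true fixed-effect slope
u = rng.normal(0, 1.5, n_subj)               # random intercepts by subject
x = rng.normal(0, 1, (n_subj, n_trials))
y = 5.0 + u[:, None] + beta * x + rng.normal(0, 1, (n_subj, n_trials))

# Two-stage analysis: fit an ordinary least-squares slope within each
# subject, then average across subjects. This respects the grouping
# structure (unlike pooling all trials and ignoring subjects) and is a
# conceptual stepping stone to full mixed-effects estimation.
slopes = [np.polyfit(x[j], y[j], 1)[0] for j in range(n_subj)]
estimate = float(np.mean(slopes))  # close to the true slope of 2.0
```

In R, the corresponding full model would be something like `lmer(y ~ x + (1 | subject))` from lme4; the tutorial covers how to estimate and interpret such models properly.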


Anticipation is making me look: Prediction, priming and computation in language understanding

Jim Magnuson University of Connecticut and Haskins Labs

+ more

Over the last several years, theories of human language processing have emerged that posit highly top-down architectures that actively and optimally forecast upcoming words. Simultaneously, there has been a resurgence of theories that assume modularity of initial processing and late integration of top-down information. I will describe two studies that address both trends. The first study uses eye tracking data to challenge optimality assumptions. Specifically, we find that some linguistic "anticipation" is less forward-looking than it appears in contemporary experimental paradigms. Much (though not all) anticipation may be explained by passive mechanisms like priming rather than optimal forecasting, greatly reducing the computational complexity that must be attributed to human language comprehension. The other study uses event-related potentials (ERPs) to re-evaluate a central finding that motivates modularity assumptions in some theories, and reveals that a component argued to reflect encapsulated syntactic processing (the ELAN) is sensitive to anticipation based on nonlinguistic expectations. These seemingly contrary results are consistent with a variety of current theories that posit dual (active and passive) processing mechanisms, as well as dynamical systems approaches such as Tabor's self-organizing parser.


In Search of Grammar

Carol Padden

+ more

In the course of working on a new sign language used by a community of Bedouins in the Negev (Al-Sayyid Bedouin Sign Language, or ABSL), we discovered a type of lexical pattern that seems to have no precise parallel in spoken language. When we surveyed older sign languages, we found that they too exhibited a preferential pattern, which we call the object vs. handling pattern. ASL signers favor the object pattern, in which the physical properties of the object, such as the length of a toothbrush or the teeth of a comb, are represented. Signers of New Zealand Sign Language, a dialect of British Sign Language, favor the handling pattern, in which they show how the object is held in the hand, such as grasping a toothbrush or holding a comb. The bias is never entirely exclusive, but strongly preferential. In principle, the lexical items of a given language could be divided evenly between the two types because they are equally iconic, but signers of unrelated sign languages are surprisingly consistent in their preference for one or the other pattern.

The discovery of a structure that seems specific to sign languages calls into question the task of identifying grammatical properties of human languages. Do human languages share an underlying set of structures, beyond which there are structures that differ depending on modality? Or are languages assemblages of structures that emerge in time using resources (literally) at hand - in the case of sign languages, gestural resources? The existence of this lexicalization pattern in ABSL is provocative for understanding properties of grammars: handling and instrument forms are equally iconic, yet in a new sign language, preferential structure emerges early in its history, at least by the second generation.


Behavioral and Electrophysiological Investigations into the Structure and Computation of Concrete Concepts

Ben Amsel

+ more

This talk addresses the computation and organization of conceptual knowledge. Specifically, I focus on the recruitment of concrete knowledge during single word reading, which I address with a number of behavioral and electrophysiological experiments. I'll present a study assessing how the number of visual semantic features (listed by participants as being part of a given concept) influences both the speed of word meaning computation and its neural underpinnings. I also assess the flexibility and timecourse of semantic knowledge activation as a function of specific task constraints using a series of behavioral studies and a single-trial ERP study. I argue that the results presented herein do not support pure unitary theories of semantic memory organization. I conclude that the dynamic timecourses, topographies, and feature activation profiles are most consistent with a flexible conceptual system, wherein dynamic recruitment of representations in modality-specific and supramodal cortex is a crucial element of word meaning computation in the brain.


The Development of Representations of Polysemous Words

Mahesh Srinivasan

+ more

A primary function of the representation of the meaning of a word is to link word forms with concepts--this ensures that when we hear a word, we activate the relevant concept, and that when we wish to communicate about some concept, we use the appropriate word form. The meaning of a word must be phrased at the appropriate level of granularity--it must be general enough to encode what the different uses of a word have in common (e.g., a core meaning of run must be general enough to apply to the different cases of humans and animals running), but cannot be so general that it also applies to meanings that the word is not used for (e.g., the meaning of run should not also be applicable to a snake's movement).

The focus of this talk is on the representation of the meanings of polysemous words--e.g., the use of book to refer to an object (the gray book) or to the content it contains (the interesting book); the use of chicken to refer to an animal (the thirsty chicken) or to the meat derived from that animal (the tasty chicken). Because the different uses of polysemous words often cross ontological boundaries, single core representations that encode what the different uses have in common would be too vague to properly constrain how polysemous words are used. One alternative, which I refer to as the List Model of polysemy, is that each of the uses of a polysemous word may be separately listed in memory and linked to separate concepts. This approach, however, misses important generalizations with respect to how polysemous words are used--for instance, in addition to words like book, words like video and record can refer to the objects and to the abstract content they contain, and in addition to words like chicken, words like lamb and fish can refer to animals and to the meat derived from them.

A first set of studies explored 4 and 5-year-old children's representations of the polysemous meanings of words like chicken. These studies provided evidence that early in development, polysemous meanings are not represented as separate words but instead rely on generative structures: lexical or conceptual structures that encode the relations between polysemous meanings and permit the meanings of these words to shift. A second set of studies examined whether generative structures could facilitate children's acquisition of polysemous meanings, by constraining their hypotheses about how the meaning of a novel word can shift. These findings are discussed with respect to the implications they have for the representational basis of flexible language.


WOMAN BOX PUSH, but *not* WOMAN BOY PUSH: How reversible events influence linguistic structure

Matt Hall

+ more

Human language structure is far from random, for two kinds of reasons: first, because we acquire language from input, and so learn the patterns that we were exposed to. But second, because we sometimes *fail* to learn or pass on the patterns in our input: instead, we gradually alter the system in systematic ways. Identifying these internal cognitive forces that compel language to take on particular forms has been one focus of my research.

In this talk, I argue that one of these forces is whether or not the patient of a transitive event could plausibly be the agent. (In other words, does semantics alone suffice for assigning thematic roles, or are other cues needed?) Using pantomime as a way to let participants sidestep the grammar of their native language, I show that despite a preference for SOV word order in non-reversible events (e.g. a woman pushing a box), participants actively avoid such descriptions of reversible events (e.g. a woman pushing a boy). I also show that some participants spontaneously invent proto-case marking for these cases. Taken together, the evidence suggests that while SOV may be preferred early in the development of a communicative system, the need to communicate about reversible events is its "Achilles heel", which contributes to the emergence of new linguistic structures.


How does a grammatical category emerge? The case of sign language agreement verbs

Irit Meir University of Haifa

+ more

Grammatical categories (often referred to as 'functional categories') play an important role in various syntactic theories, yet their nature is often poorly understood or just taken for granted. It is not clear, for example, how many categories there are, whether there is a universal set of categories, and whether there are any constraints on possible categories.

In this talk I argue that one way of getting a better understanding of the nature of grammatical categories is by taking a diachronic perspective, that is, by examining how a grammatical category is "born". I will trace the development of the category of agreement verbs in Israeli Sign Language (ISL), a class of verbs that denotes transfer and is marked by a specific grammatical inflection. By analyzing the different stages that gave rise to this system, I provide evidence for the following claims:

1. A grammatical category may arise not only via grammaticalization of free words, but also as a result of back formation and reanalysis. Therefore "today's morphology" is not always "yesterday's syntax".

2. A grammatical category may be modality-dependent. This constitutes a challenge to existing theories, especially to theories assuming a universal set of categories, as they do not predict modality-dependent functional categories.


Rhythm classes and the measuring of speech rhythm

Amalia Arvaniti

+ more

In the past decade, metrics that seek to measure durational variability in speech – such as the %V-ΔC of Ramus et al. (1999) or the PVIs of Grabe & Low (2002) – have been used to quantify the impressionistic division of languages into stress- and syllable-timing. Their initial success has bolstered the belief in rhythmic classes and has been used to support research on language acquisition and speech processing that relies on the idea of rhythmic classes as well. Yet research based on rhythm metrics is fraught with discrepancies which warrant further investigation. In this talk, I present results from production and perception that cast doubt on the validity of metrics as measures of rhythm and consequently on the idea of rhythm classes as a valid typological distinction.
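The metrics under discussion are simple functions of measured interval durations. A minimal sketch (Python; the durations below are toy values in milliseconds, not real speech measurements) of the nPVI of Grabe & Low (2002) and the %V of Ramus et al. (1999):

```python
def npvi(durations):
    """Normalized Pairwise Variability Index (Grabe & Low, 2002):
    the mean absolute difference between successive interval
    durations, normalized by the mean of each pair, scaled by 100."""
    pairs = list(zip(durations, durations[1:]))
    return 100 * sum(abs(a - b) / ((a + b) / 2) for a, b in pairs) / len(pairs)

def percent_v(vocalic, consonantal):
    """%V (Ramus et al., 1999): the proportion of total utterance
    duration occupied by vocalic intervals."""
    return 100 * sum(vocalic) / (sum(vocalic) + sum(consonantal))

npvi([100, 100, 100])   # 0: perfectly even intervals, no variability
npvi([50, 150, 50])     # 100: strong long-short alternation
```

Higher vocalic nPVI and lower %V have been taken to indicate stress-timing; the talk's point is that, in practice, such scores are dominated by speaker, task, and material differences rather than by language.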

In the production study, sentences, story reading and spontaneous speech were elicited from speakers of English, German, Spanish, Italian, Greek and Korean. The results show that metrics are less sensitive to cross-linguistic differences than to speaker-specific timing patterns, the elicitation task and the within-language variation reflected in the materials used. Overall, these results suggest that rhythmic classification based on measuring the durational variability of segmental intervals is difficult if not impossible to achieve with any consistency.

The perceptual results also show that classification based on the impression languages give (the original basis for rhythm classes) is equally difficult. Specifically, listeners heard either low-pass filtered sentences or sentences converted to "flat [sasasa]" – in which all vowel intervals are replaced by [a] and all consonantal intervals by [s] while F0 is flattened – and used a Likert scale to rate them for similarity to a series of non-speech trochees. It was hypothesized that stress-timed languages like English and German, the rhythm of which is said to be based on foot-initial prominences, would be rated more trochee-like than syllable-timed languages, such as Spanish or Italian, whose rhythm is said to be a cadence. The results provide no support for the idea that classification is driven by rhythm class, and indicate that the timing of consonantal and vocalic intervals is not processed independently of other prosodic cues (such as amplitude and F0).

Taken together, these results strongly suggest that the classification into distinct rhythm classes cannot be achieved either on the basis of measuring particular timing characteristics of the speech signal or by relying on the impression of rhythmicity languages give to listeners. These results cast doubt on the idea of rhythm classes and, consequently, on proposals about language acquisition and speech processing that rely on the categorization of languages along these lines. The reasons behind these results will be discussed, and proposals for an alternative view of speech rhythm and for protocols that can be used to investigate it experimentally will be presented.


The phonemic restoration effect reveals pre-N400 effect of supportive sentence context in speech perception

David Groppe

+ more

The phonemic restoration effect refers to the tendency for people to hallucinate a phoneme replaced by a non-speech sound (e.g., a tone) in a word. This illusion can be influenced by preceding sentential context providing information about the likelihood of the missing phoneme. The saliency of the illusion suggests that supportive context can affect relatively low (phonemic or lower) levels of speech processing, which would be consistent with interactive theories of speech perception (McClelland & Elman, 1986; Mirman, McClelland, & Holt, 2006) and predictive theories of cortical processing (Friston, 2005; Summerfield & Egner, 2009). Indeed, a previous event-related brain potential (ERP) investigation of the phonemic restoration effect (Sivonen, Maess, Lattner, & Friederici, 2006) found that the processing of coughs replacing high versus low probability phonemes in sentential words differed from each other as early as the auditory N1 (120-180 ms post-stimulus); this result, however, was confounded by physical differences between the high and low probability speech stimuli. Thus it could have been caused by factors such as habituation and not by supportive context. We conducted a similar ERP experiment avoiding this confound by using the same auditory stimuli preceded by text that made critical phonemes more or less probable. We too found the robust N400 effect of phoneme/word probability, but did not observe the early N1 effect. We did, however, observe a left posterior effect of phoneme/word probability around 192-224 ms. It is not yet clear what level of processing (e.g., phonemic, lexical) produced this effect, but the effect is clear evidence that supportive sentence context can affect speech comprehension well in advance of the lexical/post-lexical semantic processing indexed by the N400. While a pre-N400 effect is supportive of interactive theories of speech perception and predictive theories of cortical processing, it is surprising that yet earlier effects weren't found if these theories are indeed true.

This work was completed in collaboration with Marvin Choi, Tiffany Huang, Joseph Schilz, Ben Topkins, Tom Urbach, and Marta Kutas.


Perceiving speech in context: Neural and behavioral evidence for continuous cue encoding and combination

Joe Toscano Department of Psychology, University of Iowa

+ more

A classic problem in speech perception concerns the lack of a one-to-one, invariant mapping between acoustic cues in the sound signal and phonological and lexical-level representations. A great deal of this variability is due to different types of context effects, such as variation in speaking rate, differences between talkers' voices, and coarticulation from surrounding segments. Within each of these domains, a number of specialized solutions have been proposed. Here, I argue that general cue-integration principles may be sufficient for explaining context effects. Crucially, these principles can be implemented as relatively simple combinations of continuous cues, allowing listeners to integrate multiple, redundant sources of information and factor out predictable variation. This approach suggests that listeners encode acoustic cues independently of phonological categories and that techniques used to describe how they combine multiple cues may also apply to certain context effects. To assess these predictions, I present work using a recently developed ERP technique that allows us to examine cue encoding and phonological categorization, as well as experiments looking at listeners' use of multiple cues in a visual world eye-tracking task. In addition, I describe work extending this cue-integration approach to examine effects in spoken word recognition and experiments looking at whether context effects occur at the level of encoding or categorization. Together, these results suggest that general mechanisms of cue-integration may be much more powerful for handling variability in speech than previously thought.
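To illustrate the kind of "relatively simple combination of continuous cues" at issue, the sketch below combines two continuous cues to a stop voicing contrast (voice onset time and onset f0) in a logistic function, with a speaking-rate context effect handled by factoring predictable variation out of the cue value before combination. The weights, bias, and rate adjustment are invented for illustration, not fitted to data:

```python
import math

def categorize(vot_ms, f0_hz, rate=1.0, w_vot=0.25, w_f0=0.03, bias=-9.0):
    """Toy logistic combination of two continuous cues to a stop
    voicing contrast: voice onset time (VOT) and onset f0.
    The speaking-rate context effect is modeled by normalizing the
    durational cue first: in faster speech all durations shrink, so a
    given raw VOT is stronger evidence for "voiceless".
    Returns P(voiceless). All parameter values are made up."""
    vot_adj = vot_ms * rate       # rate-normalized VOT (assumed form)
    z = w_vot * vot_adj + w_f0 * f0_hz + bias
    return 1 / (1 + math.exp(-z))
```

For example, a long VOT with high onset f0 (`categorize(60, 120)`) comes out near 1 (clearly voiceless), a short VOT with low f0 (`categorize(10, 100)`) near 0, and the same ambiguous token counts as more voiceless when the talker is speaking quickly than slowly.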


Investigating the Time Course of Accented Speech Perception

Melanie Tumlin

+ more

While a fair amount of research has explored how adult listeners perceptually accommodate to accented speech, significantly less is known about how accent information is integrated in on-line processing, and about how children handle this type of variability in spoken language. I will discuss a proposed set of experiments aimed at addressing these questions using eye-tracking.


Cumulative processing factors in negative island contexts

Simone Gieselman

+ more

Why is a particular sentence perceived as unacceptable? Is it because it violates a global grammatical constraint or because different factors conspire in a cumulative fashion to produce this effect? This question is especially pertinent within the linguistic literature on so-called island phenomena. In this talk, we focus on a particular kind of island created by negation (e.g. "How fast didn’t the intern complete the project?" – N.B. the sentence is fine without negation). We show that by using acceptability rating as a measure, we are able to isolate factors (negation, extraction, and referentiality) that account for gradience in the acceptability judgment data and eventually lead to unacceptability. There is a large literature on the difficulty of processing _negation_. Though the reason for this cost is still subject to debate, what is uncontroversial is that an appropriate discourse context is important. _Extraction_ has been associated with working memory load, and negative islands are contexts in which the processing of negation and working memory costs interact. _Referentiality_ has been primarily discussed in the syntactic literature on extraction, but is a fundamentally semantic notion. Our results show interaction effects among all three factors, which may be evidence that they draw on interrelated cognitive resources. We also report results comparing the effects of negation and of _also_, a so-called presupposition trigger that imposes conditions on the discourse. The presupposition trigger and negation seem to interact with extraction in a very similar fashion, indicating that it is indeed the costs created by the discourse conditions of negation that interact with extraction.


Neural substrates of rhythm, timing, and speech comprehension

Sonja Kotz Max Planck Institute for Human Cognitive and Brain Sciences (Leipzig)

+ more

Cortical neural correlates of linguistic functions are well documented in the neuroscience and the neuropsychological literature. However, the influence of non-linguistic functions such as rhythm and timing is still understudied in speech comprehension (see Kotz & Schwartze, 2010). This is surprising as rhythm and timing play a critical role in learning, can compensate for acquired and developmental speech and language disorders, and further our understanding of subcortical contributions to linguistic and non-linguistic functions. For example, recent neuroimaging and clinical evidence has confirmed the contributions of classical motor control areas (cerebellum (CE), basal ganglia (BG), supplementary motor area (SMA)) to rhythm, timing, music, and speech perception (Chen et al., 2008; Grahn et al., 2007; Geiser et al., 2009; Kotz et al., 2005; 2009). We consider serial order and temporal precision to be the mechanisms that are shared in simple and complex motor behaviour (e.g. Salinas, 2009) and speech comprehension (Kotz et al., 2009). Here we investigate with event-related brain potentials (ERPs) and functional magnetic resonance imaging (fMRI) (1) how syntax, adhering to serial and hierarchical order, and rhythm, organizing the temporal unfolding of utterances in speech, interact, and (2) how classical motor areas interface with supposed specialized areas in the perisylvian speech comprehension network. Our results reveal an interaction of syntax and rhythm in the P600 ERP component that is linked to sentential integration processes (Schmidt-Kassow & Kotz, 2009), a facilitatory effect of rhythmic regularity in classical perisylvian speech areas such as the superior temporal gyrus/sulcus (STG/STS), and the recruitment of classical motor areas (preSMA, lateral premotor cortex, BG, and CE), highlighting the impact of rhythm on syntax in speech comprehension.


Look before you speak: The impact of perceptual, lexical, and conceptual accessibility on word ordering

Stefanie Kuchinsky Medical University of South Carolina

+ more

Given the amount of visual information in a scene, how do speakers determine what to talk about first? One hypothesis is that speakers start talking about what has attentional priority, while another is that speakers first extract the scene gist, using the obtained relational information to generate a rudimentary sentence plan before retrieving individual words. I will present experiments which evaluate these views by examining the conditions under which different types of information may be relevant for production. I do so by employing a modified version of Gleitman, January, Nappa, and Trueswell’s (2007) attentional cuing paradigm in which participants were found to be more likely to begin picture descriptions with a particular actor if their attention had been drawn to it. I examine the extent to which these effects are modulated by the amount of time given to extract the scene gist and by the ease of identifying the pictured event and actor names. I suggest that perceptual factors influence word ordering only when conceptual information is not immediately available or insufficient for generating an utterance framework.


Nouns, Verbs, Arguments and Iconicity in an Emerging Sign Language

John Haviland

+ more

Zinacantec Family Homesign (ZFHS) is a new sign language developed in a single household in highland Chiapas, Mexico, where the deaf signers are surrounded by speakers of Tzotzil (Mayan). Such a new language and its highly iconic sign vehicles challenge easy assignment of such foundational linguistic elements as ‘part-of-speech’ categories and concomitant analysis of clause structure, especially syntactic expression of verbs and their arguments.


Toward organizing principles for brain computation of language and cognition

Ned Sahin

+ more

The brain basis of human cognition is often described in terms of functional *regions*, largely because of the “dumb luck” that one of the relevant dimensions for brain organization is spatial: Anatomical regions like Broca’s and Wernicke’s areas have measurably distinct properties, and anatomically restricted injuries like strokes (and more recently anatomical imaging like fMRI) make it convenient to characterize them. However, the spatial dimension is of course not the only organizing principle of the brain. As just one example, I recently probed within Broca’s area, and found that within a single sub-region there were three distinct neural processing stages for three linguistically orthogonal aspects of word production: meaning, structure and sound form (peak activity at ~200, 320, and 450 ms) (Science, 2009). [This was enabled by the wonderful privilege to record intra-cranial electrophysiology (ICE) from electrodes implanted in the awake behaving human brain to guide surgery.] Multiplexing in the time dimension is therefore a necessary organizing principle for language processing; however, it is not sufficient. For instance, in a separate ICE data set, I found that language-related brain circuits at a given location and time oscillated at multiple frequencies, and these oscillatory bands had distinct temporal dynamics, correlated with distinct linguistic information, and indicated distinct physiological processes (e.g. cell firing vs. EPSPs). This allowed for a rudimentary process flow diagram of word production, from early visual input (~60ms) to articulatory output (~600ms and beyond) with multiple serial and parallel stages. However, even though the combination of spatial, temporal, frequency, and physiological dimensions may get us a little further toward the organizing principles of *individual* computational entities, there remains a larger challenge, namely in understanding how they work *together*.
As an analogy, consider a team of specialists recruited for a complex project. The efficiency they offer is lost unless you can (a) divvy up parts of the project among the correct specialists, and then crucially (b) reassemble their individual output into a single solution. I will discuss one very recent ICE result that might suggest a possible organizing principle for how the brain addresses this challenge. Task-activated cell populations resonated in sync (phase-locked) with other populations near and far, in three distinct waves, consistent with the following model. The early wave readies the entire cortical system (from visual to motor) and divvies up the task among specialized circuits. During the middle wave, the actual linguistic processing takes place within the individual entities. In the final wave (around the time of the utterance), results from the specialized processing are brought together into a single holistic representation for output (e.g. a single grammatically-inflected word). These various candidate dimensions and principles will be discussed in terms of future directions.
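Phase-locking between cell populations of the kind described here is commonly quantified with the phase-locking value (PLV). A minimal sketch on synthetic phase series (illustrating the measure only, not the actual ICE analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

def plv(phase_a, phase_b):
    """Phase-locking value: the modulus of the mean unit vector of the
    phase differences. 1 = perfectly phase-locked; values near 0 mean
    no consistent phase relation between the two signals."""
    return abs(np.mean(np.exp(1j * (phase_a - phase_b))))

t = np.linspace(0, 1, 500)
phi = 2 * np.pi * 10 * t                                   # a 10 Hz carrier phase
locked = plv(phi, phi + 0.5 + rng.normal(0, 0.2, t.size))  # stable lag plus jitter
unrelated = plv(phi, rng.uniform(0, 2 * np.pi, t.size))    # no phase relation
```

A constant phase lag with small jitter yields a PLV near 1, while uniformly random phases yield a PLV near 0; in real electrophysiological data the phases would first be extracted per frequency band (e.g. via a Hilbert or wavelet transform).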


Quantifiers more or less quantify on-line: ERP evidence for partial incremental interpretation

Tom Urbach

+ more

There is plenty of evidence that people can construct rich representations of sentence meaning essentially word by word (incremental interpretation). But there is also clear evidence of systematic shallow (partial, underspecified, good enough) interpretation. "What is the holiday where kids dress up in costumes and go door to door giving out candy?" There isn't one, though on Halloween kids typically get candy (Reder & Kusbit, 1991). If a strong general principle of incremental interpretation with full, immediate semantic interpretation is untenable, it is unclear what principle(s) governing the speed and depth of interpretation should replace it. One way forward is to learn more about special cases that may prove diagnostic.

In this talk I will present results from three experiments that used plausibility judgments and ERPs in combination to track the time course of interpretation of quantified noun phrases (most farmers, a small number of engineers) and adverbs of quantification (often, rarely) in isolated sentences. The designs cross these quantifier types with general knowledge, as in [Most/Few] farmers grow [crops/worms]. In post-sentence plausibility judgments we observed a cross-over interaction in which, crucially, "Few farmers grow worms" was rated more plausible than "Few farmers grow crops". We also found that quantifiers modulated N400 amplitude in the expected direction at the critical object noun (crops/worms), but this effect fell well short of the cross-over interaction observed for the plausibility judgments.

Together (and only together) these results suggest that the comprehension system does register the meaning of quantifier expressions to at least some degree as they are initially encountered (incrementally) but that the full semantic interpretations in evidence at the plausibility judgments don't emerge until later.


Understanding language evolution through iterated learning: some ongoing experimental work

Simon Kirby School of Philosophy, Psychology and Language Sciences, University of Edinburgh

+ more

Language is not only a learned behaviour, it is also one that persists over time by learners learning from the behaviour of other learners. The implications of this process of "iterated learning" are only now beginning to be understood in general terms. Although it obviously underpins the phenomenon of language change, work over the last few years has shown it also drives the emergence of language structure itself and is therefore an integral part of the story of language evolution.

In this informal talk, I will present some of our ongoing, as yet unpublished, work attempting to understand how iterated learning leads to the emergence of language structure by recreating the transmission process in miniature in the experimental lab. My aim will be to provoke discussion of the promises, limitations, and implications of iterated learning experiments and (hopefully!) gather suggestions of what we should look at next.
The talk will follow on from my Cognitive Science talk the previous day, but I will start with a brief recap for those of you who are unable to attend both.


Automatic lexico-semantic activation across languages: Evidence from the masked priming paradigm

Maria Dimitropoulou Basque Center on Cognition, Brain and Language

+ more

The present work is aimed at examining the extent to which a bilingual individual automatically accesses the representations in one of the known languages independently from the other, and whether such instances of automatic cross-language activation are modulated by the level of proficiency in the non-dominant language. Our findings provide evidence for the existence of strong cross-language interactions and pose certain constraints on the predictions made by models of bilingual lexico-semantic organization.


Children's Sensitivity to Pitch Variation in Language

Carolyn Quam

+ more

Children acquire consonant and vowel categories by 12 months, but appear to take much longer to learn to interpret perceptible acoustic variation. Here, we consider children's interpretation of pitch variation. Pitch operates, often simultaneously, at different levels of linguistic structure. English-learning children must disregard pitch at the lexical level--since English is not a tone language--while still attending to pitch for its other functions. Study 1 shows that 2.5-year-old English learners know pitch cannot differentiate words in English. Study 2 finds that not until age 4–5 do children correctly interpret pitch cues to emotions. Study 3 demonstrates some improvement between 2.5 and 5 years in exploiting the pitch cue to lexical stress, but continuing difficulties at the older ages. These findings suggest a late trajectory for interpretation of prosodic variation; we suggest potential explanations for this protracted time-course.


Is the 'gl' of 'glimmer', 'gleam', and 'glow' meaningful? Frequency, sound symbolism, and the mental representation of phonaesthemes

Benjamin Bergen

+ more

For the most part, the sounds of words in a language are arbitrary, given their meanings. But there are exceptions. In fact, there are two ways in which words can be non-arbitrary. For one, there can be external reasons why a particular form would go with a given meaning, such as sound symbolism. Second, there are systematicities in languages, where words with similar forms are more likely than chance to have similar meanings. Such systematic form-meaning pairings, as observed in 'gleam', 'glow', and 'glimmer', are known as phonaesthemes. But are these systematicities psychologically real, or are they merely distributional relics of language change? In this talk, I'll describe some experimental work showing that these systematic form-meaning pairings are more than distributional facts about a lexicon - they also reflect organizational characteristics of the mental representation of words, their meanings, and their parts. I'll describe a priming methodology used to test what it is that leads phonaesthemes to be mentally represented, measuring effects of frequency, cue validity, and sound symbolism.


Behavioral and neural measures of comprehension validation

Murray Singer University of Manitoba

+ more

It is proposed that memory-based processes permit the reader to continually monitor the congruence of the current text with its antecedents. Behavioral measures have indicated that these verification processes are influenced by factors including truth, sentence polarity, and discourse pragmatics. I will present converging ERP data that suggest hypotheses concerning stages of text integration within 1 second of processing.


Scalar Implicatures in American Sign Language

Kate Davidson

+ more

Recently there has been a large body of research, both experimental and theoretical, on 'scalar implicatures,' the name given to the inference in (1b) that is made by a listener when a speaker utters (1a).
(1a) Speaker Says: Some of the cookies are on the table.
(1b) Hearer Infers: Not all of the cookies are on the table.
Theoretical debate focuses primarily on whether the inference in (1b) is due to non-linguistic pragmatic reasoning about the meaning of the sentence in (1a), or due to grammatical mechanisms that include the information in (1b) as part of the compositional semantic meaning of (1a). Because the type of inference in (1) happens in a wide variety of lexical domains and stands at the interface between the linguistic content and the surrounding social context, experiments have been conducted on the timing, acquisition, and effect of context on scalar implicatures in various spoken languages, though not in a sign language.

I will be presenting the results of one completed and two ongoing behavioral experiments which investigate scalar implicatures in American Sign Language from comparative and developmental perspectives using a new video/computer felicity judgement paradigm. Comparisons between ASL and English show that while differences in one scalar domain (coordination) do not affect scalar implicature calculations, differences in another (spatial encoding in classifiers in ASL vs. non-spatial description in English) do have effects on interpretation. This work also sets a baseline comparison for data I present that test later L1 learners of ASL, who are without general cognitive impairments but often show subtle linguistic deficits due to lack of early linguistic input, and thus can help address the issue of linguistic vs. social knowledge required for scalar implicatures.


Recreating Duality of Patterning in the Lab: A new experimental paradigm for studying the emergence of sub-lexical structure

Alex Del Giudice (in collaboration with Simon Kirby & Carol Padden)

+ more

I will present results of 3 pilot experiments in a paradigm that explores the development of sub-lexical structure. In this paradigm, human participants learn a lexicon of visual symbols produced with a digitizing stylus such that the mapping from the stylus to the screen is restricted, minimizing the use of orthographic characters or pictographs. Each participant learns and recreates the set of symbols, and these recreations are transmitted to the next participant in a diffusion chain through a process of iterated learning. The iterated learning paradigm allows us to observe evolution of a "cultural" behavior such that no single participant is the driver of innovation and selection; instead the behavior is cumulatively developed across individuals.
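
The transmission loop of the iterated learning paradigm described above can be caricatured in a few lines. This is only a schematic sketch, not the actual experimental software: the symbol "parts", error rate, and chain length are all invented for illustration.

```python
import random

random.seed(0)  # reproducible toy run
SYMBOL_PARTS = list("abcde")  # hypothetical stand-ins for stroke primitives

def reproduce(symbol, error_rate=0.2):
    # Imperfect learning: each part of a symbol may be miscopied.
    return "".join(
        random.choice(SYMBOL_PARTS) if random.random() < error_rate else part
        for part in symbol)

def diffusion_chain(lexicon, generations=8):
    # Each "participant" learns from the previous participant's output,
    # so change accumulates across the chain with no single innovator.
    history = [lexicon]
    for _ in range(generations):
        lexicon = [reproduce(s) for s in lexicon]
        history.append(lexicon)
    return history

chain = diffusion_chain(["abc", "cde", "ead"])
print(chain[0], "->", chain[-1])
```

In the real paradigm the "learner" is of course a person, and convergence on shared sub-elements emerges from human biases rather than random copying noise; the sketch only shows the chain structure itself.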

We observe the transition of the lexicon from a relatively idiosyncratic set of abstract forms and form-meaning pairs to a set of symbols that show compelling evidence for sub-lexical structure, independent of meaning. As the language changes (an inescapable result of several factors), items in the lexicon of symbols begin to converge until a series of generations appears to analyze symbols as containing discrete sub-elements. This analysis leads to such sub-elements pervading the lexicon. These sub-elements are comparable to phonological units of spoken and signed languages.


The Physiology of Lateralization: Reviewing Why Brain Size REALLY Matters

Ben Cipollini

+ more

A common method for studying the cerebral cortex is to associate a cognitive function with a location in the cortex. Such associations allow us to use imaging data to make educated guesses about what cognitive functions are involved in a task, and to use physiological data to suggest relationships between anatomically related functions.

Lateralization is a special case of this method. By identifying a function as "dominant" in one hemisphere, we can attempt to relate it to other functions associated with the same hemisphere, to contrast it with functions associated with the opposite hemisphere, or to relate it to anatomical differences between the hemispheres.

This talk will review data and theory in experimental and theoretical neuroscience to motivate the method of associating function with location. From there, further data will be reviewed to highlight important caveats in our current understanding of the physiology of lateralization.

Theory of mammalian brain scaling, physiology of lateralization, and a specific focus on what we do (and do not) know about the corpus callosum will be discussed, with the goal to paint a coherent picture of what we know about the physiology of lateralization and how we can interpret experimental results within the limits of that knowledge.


Stress Matters: Effects of Anticipated Lexical Stress on Silent Reading

Charles Clifton, Jr. University of Massachusetts, Amherst

+ more

I will present findings from two eye-tracking studies designed to investigate the role of metrical prosody in silent reading. In Experiment 1, subjects read stress-alternating noun-verb homographs (e.g. PREsent, preSENT) embedded in limericks, such that the lexical stress of the homograph, as determined by context, either matched or mismatched the metrical pattern of the limerick. The results demonstrated a reading cost when readers encountered a mismatch between the predicted and actual stress pattern of the word. Experiment 2 demonstrated a similar cost of a mismatch in stress patterns in a context where the metrical constraint was mediated by lexical category rather than by explicit meter. Both experiments demonstrated that readers are slower to read words when their stress pattern does not conform to expectations. The data from these two eye-tracking experiments provide some of the first on-line evidence that metrical information is part of the default representation of a word during silent reading and plays a role in controlling eye movements.


Early experience with language really matters: Links between maternal talk, processing efficiency, and vocabulary growth in diverse groups of children

Anne Fernald (Stanford)

+ more

Research on the early development of cognition and language has focused primarily on infants from middle-class families, excluding children from less advantaged circumstances. Why does this matter? Because SES differences are robustly associated with the quantity and quality of early cognitive stimulation available to infants, and early cognitive stimulation really does matter. Longitudinal research on the development of fluency in language understanding reveals relations between processing speed in infancy and long-term outcomes, in both high-SES English-learning children and low-SES Spanish-learning children. But by 18 months, we find that low-SES children already lag substantially in both processing speed and vocabulary growth. It turns out that differences in early experience with language contribute to the variability observed in children’s efficiency in real-time processing. Within low-SES families, those children whose mothers talked with them more learned vocabulary more quickly – and they also made more rapid gains in processing speed. By examining variability both within and between groups of children who differ in early experience with language, we gained insight into common developmental trajectories of lexical growth in relation to increasing processing efficiency, and discovered environmental factors that may enable some children to progress more rapidly than others.


Tips of the slongue: Using speech errors as a measure of learning

Jill Warker

+ more

Adults can learn new artificial phonotactic constraints (e.g., /f/ only occurs at the beginning of words) by producing syllables that contain those constraints. This learning is reflected in their speech errors. However, how quickly evidence of learning appears in errors depends on the type of constraint. Second-order constraints in which the placement of a consonant depends on another characteristic of the syllable (e.g., /f/ occurs at the beginning of words if the vowel is /I/) require a longer learning period. I will present a series of experiments using speech errors as an implicit measure of learning that investigate the characteristics underlying second-order phonotactic learning, such as whether there are limits on what types of dependencies can be learned, whether consolidation plays a role in learning, and how long the learning lasts.
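
The difference between the two constraint types is easy to state formally. The checker below is merely an illustration of the definitions given above (hypothetical three-segment syllables written as strings, with "I" standing in for the vowel /I/), not part of the experiments themselves.

```python
def violates_first_order(syllable, seg="f"):
    # First-order constraint: /f/ only occurs at the beginning of a
    # syllable, so any non-initial /f/ is a violation.
    return seg in syllable[1:]

def violates_second_order(syllable, seg="f", vowel="I"):
    # Second-order constraint: /f/ may begin the syllable only if the
    # vowel is /I/ -- the violation depends on a second property of the
    # syllable, which is what makes it harder to learn.
    return syllable[0] == seg and syllable[1] != vowel

print(violates_first_order("hef"), violates_second_order("fan"))
```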


Default units in language acquisition

David Barner

+ more

When asked to “find three forks” adult speakers of English use the noun “fork” to identify units for counting. However, when number words (e.g., three) and quantifiers (e.g., more, every) are used with unfamiliar words (“Give me three blickets”) noun-specific conceptual criteria are unavailable for picking out units. This poses a problem for young children learning language, who begin to use quantifiers and number words by age two, despite knowing a relatively small number of nouns. Without knowing how individual nouns pick out units of quantification – e.g., what counts as a blicket – how could children decide whether there are three blickets or four? Three experiments suggest that children might solve this problem by assigning “default units” of quantification to number words, quantifiers, and number morphology. When shown objects broken into arbitrary pieces, 4-year-olds in Experiment 1 treated pieces as units when counting, interpreting quantifiers, and when using singular-plural morphology. Experiment 2 found that although children treat object-hood as sufficient for quantification, it is not necessary. Also sufficient for individuation are the criteria provided by known nouns. When two nameable things were glued together (e.g., two cups), children counted the glued things as two. However, when two arbitrary pieces of an object were put together (e.g., two parts of a ball), children counted them as one, even if they had previously counted the pieces as two. Experiment 3 found that when the pieces of broken things were nameable (e.g., wheels of a bicycle) 4-year-olds did not include them in counts of whole objects (e.g., bicycles). We discuss the role of default units in early language acquisition, their origin in acquisition, and how children eventually acquire an adult semantics identifying units of quantification.


(Eye)tracking multiple worlds

Gerry Altmann University of York, UK

+ more

The world about us changes at an extraordinary pace. If language is to have any influence on what we attend to, that influence has to be exerted at a pace that can keep up. In this talk I shall focus on two aspects of this requirement: the speed with which language can mediate visual attention, and the fact that, to be expedient (i.e. to keep up with the changing world), we do not in fact refer to all the changes that are associated with, or entailed by, an event. Rather, we infer aspects of those changes. One example of this is through elaborative inference, and another is through the manner in which we track (often unstated) changes in the states of objects as those objects undergo change. The talk will conclude with data suggesting that multiple representations of the same object in different event-dependent states may compete with one another, and that this competitive process may bring both costs and benefits.


Are grammatical constructions meaningful? What mouse-tracking tells us.

Benjamin Bergen

+ more

All languages display systematic patterns of grammar. These "grammatical constructions" serve to organize words. But on some theoretical accounts (e.g. Goldberg, 1995) they do more than this - they also contribute to the meaning of the utterances they occur in. For instance, the English prepositional dative (1) and double-object constructions (2) have been argued to encode slightly different meanings; the dative ostensibly encodes motion along a path, while the double-object construction encodes transfer of possession (Langacker, 1987).

1. I'm sending the book to my brother.
2. I'm sending my brother the book.

I'll report on two studies that experimentally investigate the effects that hearing a sentence with one construction or another has on language comprehenders. Both studies use mouse-tracking to measure physical responses that comprehenders make subsequent to sentence processing. The first addresses the purported differences between the prepositional dative and double-object constructions, and the second compares active and passive constructions. In both, we find that the grammatical construction used affects how comprehenders subsequently move their bodies, which suggests that constructions may contribute to the process of meaning construction.


Modeling OCP-Place with the Maximum Entropy Phonotactic Learner

Rebecca Colavin

+ more

Recent work on modeling speaker judgments has been marked by the advent of models that assume distinctive features and natural classes as the representational elements of phonotactic processing. We investigate the performance of one such model, the Hayes and Wilson (2008) Maximum Entropy (MaxEnt) Phonotactic Learner, and show that the model fails to make the generalizations necessary to predict speaker judgments for a language where a complex constraint is active, and furthermore that in some cases the relationship between gradient speaker judgments and the statistics of the lexicon is not transparent.

Hayes & Wilson’s learner defines a set of natural classes based on distinctive features and learns a set of weighted phonotactic constraints by iterating between (i) weighting an existing set of constraints according to the principle of Maximum Entropy, and (ii) adding new constraints based on their Observed/Expected (O/E) ratios given the current constraint set, starting with low ratios and moving incrementally higher.
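
One iteration of that two-step loop can be sketched in miniature. This is a toy caricature of the procedure just described, not Hayes & Wilson's implementation: the alphabet, lexicon, candidate bigram constraints, and the crude gradient-ascent weight fitting are all invented for illustration.

```python
import math
from itertools import product

ALPHABET = ["p", "t", "a", "i"]
LEXICON = ["pa", "ta", "pi"]           # hypothetical training data
CANDIDATES = [("t", "i"), ("p", "p")]  # hypothetical bigram constraints

def violations(word, constraint):
    a, b = constraint
    return sum(1 for x, y in zip(word, word[1:]) if (x, y) == (a, b))

def score(word, grammar):
    # MaxEnt "harmony": exp of the negative weighted violation count.
    return math.exp(-sum(w * violations(word, c) for c, w in grammar))

def probs(grammar, length=2):
    # Distribution over all possible words of a fixed length.
    words = ["".join(p) for p in product(ALPHABET, repeat=length)]
    z = sum(score(w, grammar) for w in words)
    return {w: score(w, grammar) / z for w in words}

def o_over_e(constraint, grammar):
    # Observed/Expected violation counts, used to rank candidates.
    observed = sum(violations(w, constraint) for w in LEXICON)
    expected = len(LEXICON) * sum(
        p * violations(w, constraint) for w, p in probs(grammar).items())
    return (observed + 1e-9) / (expected + 1e-9)

def fit_weights(grammar, steps=200, lr=0.5):
    # Step (i): crude gradient ascent on the log-likelihood of LEXICON.
    grammar = list(grammar)
    for _ in range(steps):
        p = probs(grammar)
        for i, (c, w) in enumerate(grammar):
            obs = sum(violations(x, c) for x in LEXICON) / len(LEXICON)
            exp = sum(pr * violations(x, c) for x, pr in p.items())
            grammar[i] = (c, max(0.0, w + lr * (exp - obs)))
    return grammar

# Step (ii): add the candidate with the lowest O/E ratio, then reweight.
best = min(CANDIDATES, key=lambda c: o_over_e(c, []))
grammar = fit_weights([(best, 0.0)])
print(best, grammar)
```

Because the selected bigram is unattested in the toy lexicon (observed = 0, expected > 0), fitting drives its weight upward, i.e. the grammar penalizes it, which is the behavior the O/E selection heuristic is designed to produce.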

We tested the MaxEnt learner on data from Amharic, a Semitic language. Like other Semitic languages Amharic verb roots show OCP violations for place of articulation (Bender & Fulass 1978, Rose & King 2007). Homorganic consonants occur less often in a verb root than expected if they co-occurred freely (Greenberg 1950, McCarthy 1994, Buckley 1997, Frisch, Pierrehumbert & Broe 2004). OCP-Place in a Semitic language poses two distinct challenges. (1) Constraint length. OCP-Place restrictions span up to three consonants. (2) Gradiency. OCP-Place restrictions in Semitic languages are stronger in some word positions and for some places of articulation than others. We trained the MaxEnt learner on a corpus of 4242 Amharic verb roots drawn from Kane (1990), and compared the learner’s performance to the judgments of nonce verb roots. Judgment data were collected from 20 native Amharic speakers, who were asked to rate the acceptability of 270 nonce verb roots, balanced for presence/absence of constraint violation, observed/expected ratio, transitional probability, expected probability, and density. 90 nonce roots contained OCP violations. The design was similar to that for Arabic in Frisch & Zawaydeh (2001) and the results showed that speakers assigned lower ratings to nonce forms with OCP violations.

We investigated the claim in Hayes and Wilson (2008) that grammars that achieve greatest explanatory coverage (as measured by assigning a high log-likelihood to the lexicon) are also those that best predict speaker judgments of nonce forms. We evaluated automatically learned grammars of many different sizes as well as a hand-written grammar whose constraints were chosen from those available to the automatic learner so as to embody OCP-Place restrictions on the co-occurrence of similar and identical consonants within a verb root. The constraints of the hand-written grammar were assigned weights via MaxEnt. The predictions of each model were compared to the Amharic native speaker judgments and the (cross-validated) log-likelihood they assigned to the learning data.

The correlation between the predictions of the hand-written grammar and the speaker judgments was higher than that for the best learned grammar (r = 0.47 and r = 0.34, respectively). However, the grammars that best predicted speaker judgments were not those with the highest log-likelihood; the correlations between speaker judgments and model predictions peaked with grammars of medium size, while log-likelihood continued to grow substantially before leveling off.

Regarding the difference in performance between the hand-written and automatically learned grammars, our results indicate that the MaxEnt learner seems to show a stronger bias toward selecting constraints that involve aggressive generalization than speaker-judgment data suggest. For a given level of accuracy (Observed/Expected ratio), the learner's generalization heuristic selects short constraints over longer ones. A majority of the constraints that are acquired first span only one or two segments and capture statistical regularities of the lexicon other than OCP-Place. As the model proceeds towards longer constraints (such as the OCP-Place constraints that constitute the hand-written grammar), OCP-Place restrictions are weakened by the effect of the previously learned non-OCP restrictions and are less likely to be selected. Crucially, this suggests that to model phonotactic acquisition, constraint learning must allow either direct acquisition of high-level generalizations such as those recognized by generative phonology, or some mechanism whereby constraints learned early can be eliminated from the grammar if a more general, albeit longer, constraint is found. Finally, the misalignment between model predictiveness and the log-likelihood of the learning data suggests that there are still open questions regarding the nature of the relationship between the statistics of the lexicon and speaker judgments.


Fixation durations in first-pass reading reflect uncertainty about word identity

Nathaniel Smith

+ more

Many psycholinguistic properties believed to affect reading time, like word frequency or predictability, are dependent on the identity of the word being read. But due to sensory and other forms of noise, we would not expect the processor to have perfect information about the identity of the word to be processed -- especially during early stages of processing. We construct a simple Bayesian model of visual uncertainty during reading, and show that, in at least some cases, the processor marginalizes over possible words to produce a "best guess" of the predictability of the word being read.
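
The marginalization idea can be sketched minimally as follows. This is not the talk's actual model; the candidate words and all probabilities are invented for illustration. The comprehender's effective ("best guess") predictability of the current word is its contextual predictability averaged under the posterior over word identity given the noisy percept.

```python
def posterior(likelihood, prior):
    # Bayes: P(word | percept) proportional to
    # P(percept | word) * P(word | context)
    unnorm = {w: likelihood[w] * prior[w] for w in prior}
    z = sum(unnorm.values())
    return {w: v / z for w, v in unnorm.items()}

# Hypothetical candidates visually compatible with a blurry percept.
prior = {"cat": 0.6, "car": 0.3, "cab": 0.1}       # contextual predictability
likelihood = {"cat": 0.7, "car": 0.7, "cab": 0.1}  # visual evidence

post = posterior(likelihood, prior)
# "Best guess" predictability: marginalize contextual predictability
# over the posterior on word identity.
expected_predictability = sum(post[w] * prior[w] for w in prior)
print(post, expected_predictability)
```

Under visual noise the resulting quantity differs from the true word's own predictability, which is what lets this account make distinct predictions about first-pass fixation durations.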


Can lexical selection explain the consequences of bilingualism?

Elin Runnqvist University of Barcelona and University Pompeu Fabra

+ more

Recent research has revealed both positive (better performance in some tasks requiring executive control) and negative (slower naming latencies in picture naming tasks) consequences of bilingualism. A common explanation for these phenomena is related to an inhibitory account of the process of lexical selection: in order to achieve successful selection of the lexical representations in the intended language, the activation of those representations corresponding to the other language needs to be suppressed. This constant use of inhibitory control could explain why bilinguals outperform monolinguals in tasks requiring executive control and also why they are slower in selecting the correct word during speech production. However, while there is evidence for activation of both languages of a bilingual in the process of speech, it is not clear that the activity of the non-target language interferes with the communicative goal during lexical selection. The aim of the studies presented in this talk was to test the inhibitory account of bilingual lexical selection. Our results are most easily explained within a model that does not make use of inhibition, suggesting that lexical selection is not the cause of the advantages and disadvantages of bilingualism.


Language-specific and universal patterns of accentuation and narrow focus marking in Romani

Amalia Arvaniti

+ more

In this talk I present a first sketch of the intonation and rich accentuation and focus marking devices of Komotini Romani, a variety of Vlach Romani spoken in Thrace, the northeast region of Greece. The analysis is based on data from spontaneous conversations and story-telling involving several Romani speakers. These data show that Komotini Romani uses two cross-linguistically unusual features to mark focus. First, focus can be indicated by a non-metrically motivated stress-shift. Second, changes in accentuation are frequently co-produced with word order changes, the focus particle da (borrowed from Turkish), or stress-shift, while several of these devices can be used concurrently on occasion. These data show that focus marking strategies additional to those already known may be available cross-linguistically, such as the stress-shift of Romani. In addition, Romani can be added to the small number of languages that have a large repertoire of focus markers and tend to use them concurrently. In this respect, these data argue against a strong interpretation of the “minimality condition” recently proposed by Skopeteas & Fanselow (to appear) regarding focus marking strategies, according to which less complex structures are preferred to more complex ones (if both available in a given language) following a markedness scale from lightest to most structurally complex: in situ (prosody) < reordering < cleft. Komotini Romani clearly does not follow this scale, marking focus both prosodically and syntactically (or morphologically) on most occasions. Nevertheless, clefting is indeed extremely rare in this variety. I argue that this is because the possibilities afforded Romani by the combination of prosodic devices and word-order changes make clefting unnecessary, thus indirectly validating the scale of Skopeteas & Fanselow.


ERP Studies of Sarcasm Comprehension

Seana Coulson

+ more

Calvin: Moe. Give me my truck back. It's not yours.
Moe: It is now. You gave it to me.
Calvin: I didn't have much choice did I?! It was either give up the truck or get punched!
Moe: So?
Calvin: So I only "gave" it to you because you're bigger and meaner than me!
Moe: Yeah? So?
Calvin: The forensic marvel has reduced my logic to shambles.
Moe: You're saying you changed your mind about getting punched?
--Bill Watterson

Calvin's last utterance in this exchange is an example of discourse irony, a genre of speech in which the content of the meta-message contrasts with that of the message. In this talk, I will sketch an account of the meaning construction operations involved in sarcasm, and consider its compatibility with the cognitive neuroscience literature on the comprehension of sarcastic utterances. I will briefly review ERP studies of sarcasm comprehension, and describe recent studies in my lab on this topic.


The relationship between sound and meaning in spoken language

Lynne Nygaard Emory University

+ more

A fundamental assumption regarding spoken language is that the relationship between the sound structure of spoken words and semantic or conceptual meaning is arbitrary. Although exceptions to this arbitrariness assumption have been reported (e.g., onomatopoeia), these instances are thought to be special cases, with little relevance to spoken language and reference more generally. In this talk, I will review a series of findings that suggest that not only do non-arbitrary mappings between sound and meaning exist in spoken language, but that listeners are sensitive to these correspondences cross-linguistically and that non-arbitrary mappings have functional significance for language processing and word learning. These findings suggest that a general sensitivity to cross-modal perceptual similarities may underlie the ability to match word to meaning in spoken language.


Do Comprehenders Benefit When Their Interlocutors Repeat Their Labels and Structures?

Victor Ferreira

+ more

It is well established that speakers repeat their interlocutors’ words (Brennan & Clark, 1996) and structures (Pickering et al., 2000). But do comprehenders benefit if speakers use words and structures the comprehenders just used? A simple benefit in one-shot communicative interchanges has never been demonstrated.

Three experiments explored this issue. In each, subjects described or chose lexical or syntactic pictures that allowed more than one description. The experiments used a prime-target paradigm. For lexical pictures, on (non-filler) prime trials, subjects described a lexical picture however they wished. On target trials, subjects saw two pictures, one of which was the same as the prime. A confederate was scripted to describe that picture with either the same label or the other label. For syntactic pictures, on prime trials, subjects described a syntactic picture however they wished. On target trials, subjects saw two pictures; both had the same subject and verb but different objects, and the verb (but nothing else) was the same as in the prime. A confederate described one picture with either the same structure or the opposite structure. We measured latencies to select the described picture.

Experiment 1 explored the basic effect. Subjects chose pictures faster if the confederate repeated their labels (lexical trials) or structures (syntactic trials – even though prime and target sentence content differed!) compared to when the confederate did not. Thus, comprehenders do benefit when their interlocutors use the same labels and syntactic structures as they themselves just used.

Experiment 2 assessed whether the effect is partner specific. Half of trials were like in Experiment 1. For the other half, the computer (not the confederate) described targets. If benefits are observed even with computer descriptions, the effect is not partner specific (because computers can’t hear!). Subjects again chose pictures faster when they heard their own labels or syntactic structures repeated to them, both for confederate and for computer descriptions (for syntactic pictures, a benefit was not observed with computer descriptions in the original experiment, but was in a replication). Thus, benefits are not partner-specific.

A concern is that because subjects freely chose descriptions, the lexical effects might come from subjects’ preferences – subjects might think “fishtank” is an unusual name for the target. In Experiment 3, subjects came in one week and described all pictures. The next week, half of trials were prime-target, like in Experiment 2 (with computer descriptions). For the other half, primes were omitted, and targets were described with the same or other label or structure the subject used a week ago. Priming benefits were observed. But for lexical pictures, subjects were not faster if targets were described with the same label as they used the previous week (for syntactic pictures they were). Thus, priming effects can’t be reduced to preference effects.

Overall, comprehenders select pictures faster if they hear their own just-produced labels or syntactic structures. This isn’t partner specific, and it’s not because subjects prefer particular picture labels. This suggests repeating words and structures benefits communication.


What can ERPs and fMRI tell us about language comprehension? Streams of Processing in the Brain

Gina Kuperberg

(Tufts University, Department of Psychology; Department of Psychiatry, Mass General Hospital; Martinos Center for Biomedical Imaging)

Traditional models of sentence comprehension have generally focused on the syntactic mechanisms by which words are integrated to construct higher order meaning. The assumption here is that single words are retrieved from the lexicon and then combined together through their syntactic representations. Any material stored within semantic memory, beyond the single word, is assumed to exert its influence either directly on syntactic combination or during a later phase of processing. I will discuss event-related potential (ERP) and functional Magnetic Resonance Imaging (fMRI) studies of language comprehension that challenge such assumptions. I will suggest that word-by-word syntactic-based combination operates in parallel with semantic memory-based mechanisms, with additional analysis occurring when the outputs of these distinct but interactive neural streams of processing contradict one another. The parallel operation of these streams gives rise to a highly dynamic, interactive, and balanced system that may be a fundamental aspect of language comprehension, ensuring that it is fast and efficient, making maximal use of our prior experience, but also accurate and flexible in the face of novel input. Indeed, it may be a more general feature of comprehension outside the language domain: I will present data suggesting that analogous streams of processing may be engaged during our comprehension of real-world visual events, depicted in short, silent video-clips. Finally, I will suggest that imbalances between semantic memory-based and combinatorial streams of processing may help explain patterns of language abnormalities in various disorders. In particular, I will briefly discuss the syndrome of schizophrenia – a common neuropsychiatric disorder in which language processing can be dominated by semantic associations, at the expense of syntactic-based combination, possibly leading to symptoms of psychosis.


The Electrophysiology of Speech Production: “It Is Time, Time Matters.”

Kristof Strijkers University of Barcelona / University Pompeu Fabra


Knowledge of the speed with which we process the core structures involved in speech production, and of the temporal relation between these different mental operations, is vital for our understanding of how we are able to speak. However, the time-course of speech production hasn’t received much attention in the literature, and most of the chronometric information is derived from indirect and rather complex tasks (e.g., Indefrey & Levelt, 2004). In the present talk I aim to fill this gap by combining the fine temporal resolution of ERPs with simple overt picture naming. In particular, the electrophysiological signature in response to word retrieval will be explored. In order to obtain reliable time-course information, different lexical variables were manipulated in these tasks and contrasted to each other in the ERPs. In one such study we investigated frequency and cognate effects during overt picture naming and observed that both lexical variables elicited ERP differences starting ~185 ms after picture onset (Strijkers, Costa & Thierry, 2009). The frequency and cognate effects seemed especially sensitive to an early positive-going ERP with its peak around 200 ms (P2) and a maximal scalp distribution at bilateral posterior sites. In the remainder of the talk I will present (a) some data exploring possible confounds/alternative explanations for these initial results; (b) a few experiments seeking convergent evidence using different manipulations; and (c) a picture naming study trying to characterize not only the onset but also the duration of word retrieval. The presented data reveal that the brain engages very quickly in the retrieval of words one wishes to utter, and they offer a clear time-frame of how long it takes for the competitive process of activating and selecting words in the course of speech to be resolved. These new steps towards a temporal map of speech may provide valuable and novel insights into this remarkable human ability.


Structural Commonalities in Human and Avian Song

Adam Tierney


While many aspects of human song vary cross-culturally, other features are widespread. For example, song phrases tend to follow an arch-like pitch contour, the final note of a phrase tends to be longer than the others, and large jumps in pitch tend to be followed by pitch movements in the opposite direction. One possible explanation for these regularities is that they are somehow genetically specified. Alternatively, the patterns could be a consequence of bodily constraints. If so, they should be found in the songs of birds as well, as both humans and birds produce songs using vibrating vocal folds driven by a pulmonary air source. Here we show that all three of these patterns are present in birdsong. We encoded the most taxonomically diverse set of birdsongs analyzed to date (from 54 families) as sequences of discrete pitches. The skip-reversal pattern and final lengthening were present at the level of the entire birdsong, while the arch contour was present at the level of the individual note, suggesting (as birds breathe between notes) that it is tied to the breath cycle. Furthermore, we found these patterns in spoken sentences from four different languages and instrumental classical themes written by composers from five different countries. Our results demonstrate that diverse communicative domains share a wide variety of statistical patterns, the result of shared bodily constraints. The auditory system likely takes advantage of the existence of these patterns, as they mark the beginnings and the ends of notes and phrases and have presumably been present for as long as the vocal apparatus has existed.


Fore-words: Prediction in language comprehension

Kara Federmeier University of Illinois

Accumulating evidence attests that, during language comprehension, the brain uses context to predict features of likely upcoming items. However, although prediction seems important for comprehension, it also appears susceptible to age-related deterioration and can be associated with processing costs. The brain may address this trade-off by employing multiple processing strategies in parallel, distributed across the two cerebral hemispheres. In particular, we have shown that left hemisphere language processing seems to be oriented toward prediction and the use of top-down cues, whereas right hemisphere comprehension is more bottom-up, biased toward the veridical maintenance of information. Such asymmetries may arise, in turn, because language comprehension mechanisms are integrated with language production mechanisms only in the left hemisphere (the PARLO framework).


Giving Speech a Hand: Neural processing of co-speech gesture in native English speakers and Japanese-English bilinguals as well as typically-developing children and children with autism

Amy L. Hubbard


Successful social communication involves the integration of simultaneous input from multiple sensory modalities. Co-speech gesture plays a key role in multimodal communication, and its effects on speech perception have been demonstrated at both the behavioral and neural levels (cf. McNeill, 2005; Willems et al., 2007). We used an ecologically valid fMRI paradigm to investigate neural responses to spontaneously produced beat gesture and speech. In our first study, we found that adult native English speakers show increased activity in superior temporal gyrus and sulcus (STG/S) while viewing beat gesture in the context of speech (versus viewing a still body or nonsense movements in the context of speech). In our second study, we again observed increases in the BOLD signal in STG/S while Japanese ESL speakers viewed beat gesture in the context of speech (as compared to viewing a still body or gesture tempo in the context of speech). These data suggest that co-speech gesture is processed (and/or integrated) in areas known to underlie speech perception, and that the meaningfulness of co-speech gesture is linked to its embodiment. In our third study, we examined co-speech gesture processing in children with Autism Spectrum Disorder (ASD; a developmental disorder characterized by marked deficits in social communication) and typically developing children. Like our adult subjects, our typically developing matched controls showed increased activity in STG/S when viewing co-speech gesture (versus a still body with speech). However, children with ASD showed no increases in STG/S for this same contrast. These findings suggest that speech and gesture contribute jointly to communication during social interactions, and that the neural processes underlying co-speech gesture processing are disrupted in a clinical disorder well known for its deficits in social communication.


A new model of local coherences as resulting from Bayesian belief update

Klinton Bicknell
Joint work with Roger Levy (UC San Diego) and Vera Demberg (University of Edinburgh)


Most models of incremental sentence processing assume that the processor does not consider ungrammatical structures. However, Tabor, Galantucci, and Richardson (2004) showed evidence of cases in which a syntactic structure that is ungrammatical given the preceding input nevertheless affects the difficulty of a word, termed local coherence effects. Our work fills two gaps in the literature on local coherences. First, it demonstrates, in two experiments using an eye-tracking corpus, that local coherence effects are evident in the reading of naturalistic text, not just in rare sentence types like Tabor et al.'s. Second, it specifies a new computational model of local coherence effects under rational comprehension, proposing that local coherences arise as a result of updating bottom-up prior beliefs about the structures for a given string to posterior beliefs about the likelihoods of those structures in context. The critical intuition embodied in the model is that larger updates in probability distributions should be more processing-intensive; hence, the farther the context-conditioned posterior is from the unconditioned prior, the more radical the update required and the greater the processing load. We show that an implementation of our model using a stochastic context-free grammar (SCFG) correctly predicts the pattern of results in Tabor et al.
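
The model's core quantity is the size of the belief update from the unconditioned prior to the context-conditioned posterior over candidate parses. One standard way to formalize "size of update" in rational-comprehension work is relative entropy (KL divergence), though the abstract does not name the measure; the sketch below is purely illustrative, with hypothetical parse labels and probabilities rather than SCFG-derived ones:

```python
import math

def belief_update_size(prior, posterior):
    """Relative entropy D(posterior || prior) between two probability
    distributions over candidate parses: a measure of how radical the
    update from prior to posterior is (0 means no update at all)."""
    return sum(q * math.log(q / prior[parse])
               for parse, q in posterior.items() if q > 0)

# Hypothetical distributions over two candidate parses of an ambiguous string.
prior = {"main-clause": 0.5, "reduced-relative": 0.5}
small_update = {"main-clause": 0.6, "reduced-relative": 0.4}
large_update = {"main-clause": 0.95, "reduced-relative": 0.05}

assert belief_update_size(prior, prior) == 0.0  # identical beliefs: no load
assert belief_update_size(prior, large_update) > belief_update_size(prior, small_update)
```

KL divergence fits the intuition in the abstract: it is zero exactly when context changes nothing, and grows as the posterior moves further from the prior, so a more radical update predicts greater processing load.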


Use of orthographic and phonological codes in early word recognition and short-term memory by skilled and less skilled deaf readers of French

Nathalie Bélanger


A small proportion of profoundly deaf individuals attain expert reading skills, and it is important to understand why they become skilled readers when other deaf people do not. Despite the hypothesis that good phonological processing skills during reading are associated with good reading skills in deaf readers (Perfetti & Sandak, 2000), research has not yet provided clear answers as to whether this is the case. We investigated skilled and less skilled severely-to-profoundly deaf adult readers’ use of phonological codes during French word recognition and recall. A group of skilled hearing readers was also included as a means of comparison to the existing literature. Given the close mapping of orthographic and phonological information in alphabetical writing systems, the unique contribution of orthographic codes was also investigated. Bearing in mind the particular focus on phonological processing in deaf (and hearing) readers and the potential implications for reading education for deaf children, it appears crucial to ensure that effects of orthographic and phonological information during word processing and recall are disentangled in this population. Results from a masked primed lexical decision task, in which orthographic and phonological overlap between primes and targets was manipulated, show no difference between skilled hearing, skilled deaf, and less skilled deaf readers in the way they activate orthographic and phonological information during early word recognition. The same groups of participants also performed a serial recall task in which words were orthographically and phonologically similar (pierre, lierre, erre, etc.), orthographically dissimilar and phonologically similar (chair, clerc, bière, etc.), or orthographically and phonologically unrelated (ventre, manchot, oreille, etc.). Skilled hearing readers showed a robust phonological similarity effect, but neither group of deaf readers (skilled or less skilled) did.
All participants showed an advantage in recalling words that were orthographically and phonologically similar over words that were orthographically dissimilar and phonologically similar, suggesting that orthographic codes are also used to maintain words in short-term memory. The results of these two studies will be discussed and contrasted, and will be presented in the context of reading instruction for deaf children.


How the conceptual system gets started and why it might interest image-schema theorists

Jean Mandler


A good case can be made that the foundations of the conceptual system rest on a small number of spatial primitives. Object concepts (animal, vehicle), relational concepts (in, out), and abstract concepts (cause, goal) all begin on a purely spatial basis and can easily be represented by spatial image-schemas. Only later in development do concepts accrue bodily associations, such as feelings of force and motor information. Bodily feelings enrich concepts, but their representation remains crude and less structured than spatial representation. I suggest that simulations used to understand events rely primarily on spatial image-schemas and do not necessarily include bodily feelings.


Enemies and friends in the neighborhood: cascaded activation of word meaning and the role of phonology

Diane Pecher (with René Zeelenberg) Erasmus University Rotterdam

Many models of word recognition predict that orthographic neighbors (e.g., broom) of target words (e.g., bloom) will be activated during word processing. Cascaded models predict that semantic features of neighbors get activated before the target has been uniquely identified. This prediction is supported by the semantic congruency effect, the finding that neighbors that require the same response (e.g., living thing) facilitate semantic decisions, whereas neighbors that require the opposite response (e.g., non-living thing) interfere with semantic decisions. In a recent study we investigated the role of phonology by manipulating whether orthographic neighbors had consistent (broom) or inconsistent phonology (blood). Congruency effects in animacy decision were larger when consistent neighbors had been primed than when inconsistent neighbors had been primed. In addition, semantic congruency effects were larger for targets with phonologically consistent neighbors than for targets with phonologically inconsistent neighbors. These results are in line with models that assume an important role for phonology even in written word recognition (e.g., Van Orden, 1987).


Do lexical-syntactic selection mechanisms have rhythm?
Yet another "that" experiment

Vic Ferreira (with Katie Doyle and Tom Christensen)


Speech tends to be rhythmic, alternating strong and weak syllables. To promote alternation, speakers (of English, at least) change *how* they say things ("thirTEEN," but "THIRteen MEN"), but will they change *what* they say? Perhaps not. Words and structures may be selected only to convey speakers' messages. And phonological information may become available too late to influence lexical and syntactic selection. In two experiments, speakers produced sentences like, "NATE mainTAINED (that) ERin DAmaged EVery CAR in SIGHT" or "NATE mainTAINED (that) irENE deSTROYED the BUMper ON the TRUCK." The optional "that," a weak syllable, would promote stress alternation if mentioned in the first sentence and omitted in the second. Speakers in Experiment 1 produced sentences from memory, and said "that" about 6% more often in the first type of sentence than in the second. But memory involves comprehension and production, and evidence suggests that comprehension more than production prefers alternating stress. So speakers in Experiment 2 produced sentences by combining simple sentences into complex ones; now no difference was observed. This suggests that in extemporaneous production, speakers do not choose words and structures to promote alternating stress.


The hand that rocks the cradle rules the brain

Tom Bever


Fifty years of behavioral and clinical research supports the hypothesis that right handers with familial left handedness (RHFLH) show a distinct pattern of language behavior, which may reflect differences in the neurological organization of the lexicon. RHFLH people organize their language processing with relative emphasis on individual words, while right handers with familial right handedness (RHFRH) are more reliant on syntactic patterns. Recent fMRI studies support the idea that RHFLH people may access words more easily than RHFRH people because their lexicon is more bilaterally represented: syntactic tasks elicit left hemisphere activation in relevant areas for all subjects; corresponding lexical/semantic tasks elicit left hemisphere activation in RHFRH people, but bilateral activation in RHFLH people. This suggests that, while syntax is normally represented in the left hemisphere, lexical information and access can be more widespread in the brain. This result has implications for clinical work and for the interpretation of many clinical and neurolinguistic studies that fail to differentiate subjects’ familial handedness. It is also suggestive about the language-specific neurological basis for syntax, amidst a more general basis for the lexicon.


Are the Literacy Challenges of Spanish-English Bilinguals Limited to Reading?

Darin Woolpert


Spanish-English bilinguals (SEBs) represent 9% of students in U.S. schools. In California alone, we have 1.3 million SEB students - more than a third of that total. These children have well-established academic struggles, with literacy being a particular concern (Grigg, Donahue, & Dion, 2005; Lee, Grigg, & Donahue, 2007; Restrepo & Gray, 2007). These reading problems persist throughout their academic careers, with SEB children lagging behind their monolingual English (ME) peers in pre-literacy skills such as phonological awareness (FACES 2000, 2003), and those that graduate high school do so reading, on average, at the 8th grade level (Donahue, Voekl, Campbell, & Mazzeo, 1999). A great deal of research has focused on early emerging literacy skills (e.g., Dickinson, McCabe, Clark-Chiarelli, & Wolf, 2004; Rolla San Francisco, Mo, Carlo, August, & Snow, 2006), such as phonological decoding (word reading) and encoding (spelling), as this is a crucial first step towards literacy acquisition for ME children (Bialystok, Luk, & Kwan, 2005; Gough & Tunmer, 1986). Recent research, however, has suggested that later-emerging skills, such as morphosyntactic awareness and reading comprehension, are the most problematic for SEB children (August & Shanahan, 2006), leaving questions about the origins of these deficits and the best way to address them. Children with a first language of Spanish may struggle to learn to decode in English due to typological differences such as the opacity of English orthography (seen in Bernard Shaw’s suggestion of "ghoti" as an alternate spelling for "fish"). Alternatively, SEB children may be struggling to build their literacy skills on a shaky foundation of spoken English, leading to problems as they get older.

To evaluate these competing claims, we gave standardized tests of spoken (sentence repetition and vocabulary) and written language (spelling and reading) to 53 SEB students from kindergarten to second grade, as well as a spoken and written narrative task. The children performed at age level with regard to spelling and reading (i.e., early-emerging literacy). The children tested below the normal range on the sentence repetition and vocabulary tasks, however. On the narrative task, the children struggled with verb morphology in both the spoken and written domains, with no significant differences in terms of error rate between the first and second graders.

These findings support those reported by August and Shanahan, and suggest that SEB children do not have problems with word decoding, but rather struggle to acquire literacy due to a lack of proficiency with English overall. This has implications for interventions developed for ME children with reading problems, and for the issue of properly diagnosing language impairment in SEB children given their language profile (e.g., Paradis, Rice, Crago, & Marquis, 2008). Directions for future research will be discussed.


"Point to where the frog is pilking the rabbit”: Investigating how children learn the meaning of sentences

Caroline Rowland University of Liverpool, UK


A unique but universal quality of language is the fact that the structure (or form) of a sentence affects its meaning. To master a language, learners must discover how sentence structure conveys meaning - the form-function mapping problem. This task is complicated by the fact that different languages require speakers to encode different aspects of the event; for example, in Spanish and German (but not in English) a speaker can change the order of the words without necessarily changing the meaning of the sentence, in German (but not English or Spanish), nouns must be marked for case, and in Spanish (but not English or German), speakers must use a grammatical patient marker if the object affected is animate.

Despite the apparent complexity of the task, recent research suggests that certain aspects of form-function mapping are learned very early on. For example, even before two years of age, English children can use word order to identify who is doing what to whom, detecting that transitives with novel verbs such as "the rabbit is glorping the duck" must refer to a cartoon showing a rabbit acting on a duck, not one in which a duck acts on a rabbit. However, it is unclear whether early ability is limited to frequently heard, simple structures like the transitive, or extends to other, more complex ones. This has implications for the amount of knowledge we attribute to young children and how we characterise the acquisition process. In addition, previous work often focuses only on showing that young children can understand form-function mappings, without investigating what it is that may underlie their performance (e.g. what might be the nature of any innate biases, what cues to meaning are most salient in the language children hear).

In this talk, I will present a number of studies using a new forced-choice pointing paradigm to investigate 3- and 4-year-old English and Welsh children's comprehension of two structures that are less frequent and more complex than the transitive: the prepositional and double object dative. The results demonstrate that English and Welsh children have some verb-general knowledge of how dative syntax encodes meaning soon after their third birthday, but that this is not always enough for successful comprehension. Cross- and within-language differences suggest that the correct interpretation of datives relies on the presence of a number of surface cues, and that children's ability to use a cue depends on its frequency and salience in child-directed speech. Implications for theories of grammar and verb learning are discussed.


Resolving Conflicting Information from First-Mention Biases and Discourse Event Structure in Ambiguous Pronoun Interpretation in a Short Story Paradigm

Anna Holt and Gedeon Deak


Making anaphoric judgments in a discourse context holds several novel challenges when compared with simple, intra-sentential anaphoric resolution. Adults use the lexical features of a pronoun (e.g. gender, animacy, and number) as the most reliable source of information for disambiguation. However, when the lexical features of a pronoun are underspecified, adults use conflicting strategies to determine its referent. Adults have a well-known preference for treating the first of two or more entities in a sentence (often the grammatical subject and the continuing discourse topic) as the most salient one, and hence as the preferred pronoun referent (Arnold, Eisenband, Brown-Schmidt, & Trueswell, 2000). However, recent work (Rohde, Kehler and Elman, 2007) suggests adults also use strategies that take into account event structure and discourse cohesion when determining the referent of a pronoun in an inter-sentential story completion paradigm. For instance, participants prefer interpretations consistent with ongoing action (e.g. adults spontaneously produce more goal continuations for pronouns following sentences with a perfective verb than an imperfective verb). We tested how adults resolve conflicting cues to inter-sentential pronoun interpretation, including the first-mentioned entity, the most frequently named entity, and the entity predicted by verb aspect and verb semantics. We created a set of five-sentence short stories, each involving two actors. The two actors participate in a short exchange using a transfer-of-motion verb. An ambiguous pronoun undergoes an intransitive action, and participants are asked to choose which actor is the referent of the pronoun.
Throughout these stories, we vary (1) whether or not the current topic is also the initial subject of the first sentence (presence or absence of a topic switch); (2) whether or not the last-mentioned actor is also the initial subject of the first sentence; and (3) whether the event structure predicted by the intransitive verb suggests a goal continuation from the actor in the transfer-of-motion sentence or a source continuation. We collected responses as the reaction time to choose the appropriate actor following story presentation and the percentage of choices of the initial story topic. Future work will additionally collect eye-tracking data, as pronoun resolution in unambiguous situations is typically resolved within 200 ms (Arnold, Eisenband, Brown-Schmidt, & Trueswell, 2000).

No ERP evidence for automatic first-pass parsing: Pure word category violations do not elicit early negativity

Lisa Rosenfelt, Christopher Barkley, Kimberly K. Belvin, Chia-lin Lee, Kara Federmeier, Robert Kluender, and Marta Kutas

Certain neurocognitive processing models [1,2] map early left anterior negativity (eLAN) onto automatic first-stage parsing—because it is elicited by purported grammatical category violations, by hypothesis interfering with initial syntactic assignments—and late positivity (P600) onto processes of reanalysis occurring later in time. Crucially, however, eLAN (followed by a P600) has been reliably elicited only by words following missing nominal heads, as in Max's __ OF [3] and im __ BESUCHT (“visited in the __”) [4]. In the latter, most common paradigm, the violation occurs when a verb replaces the expected noun. Thus noun/verb violations that do not elicit early negativity [5,6] and grammatical verb gapping that does [7,8] become relevant to the discussion.

We compared ERP responses to word category violations with (a,b: “ungapped”) and without (c: “gapped”) phrasal heads in stories that required reading for comprehension rather than monitoring for syntactic well-formedness.


In sum, violation of the expected grammatical category of an incoming word is not a sufficient condition (i) [5,6] for eliciting early negativity; early negativity seems to be reliably elicited only in paradigms that gap phrasal heads (ii) [3,4,7,8,9]. If early negativity is sensitive to gapping rather than to grammatical category per se, it cannot be the index of an automatic first-pass parse assigning preliminary syntactic structure. Without a reliable ERP index of modular first-pass parsing, a crucial piece of neurocognitive evidence in support of serial parsing models is called into question.


Two is not better than one: The consequences of translation ambiguity for learning and processing

Natasha Tokowicz


Many words have more than one translation across languages. This so-called “translation ambiguity” arises mainly from existing ambiguities within a language (e.g., near-synonymy and lexical ambiguity) and poses a number of potential problems for language learning and on-line processing. My colleagues and I have explored these issues in a series of experiments that have tested individuals at different proficiency levels and that have used different pairs of languages. In this talk, I will summarize this research and discuss the implications of translation ambiguity for second language learning and processing, and the potential for this research to inform models of language processing more generally.


Talker information facilitates word recognition in real time

Sarah Creel


Recent interest in talker identity as a factor in language interpretation (e.g. Van Berkum et al., 2008) raises questions about how listeners store and utilize talker-specific information. Talker identification might conceivably involve processes external or orthogonal to language comprehension, and thus would only affect interpretation on a relatively slow time scale (lengthy discourse or a sentence, or after word offset). Alternatively, talker identity might be readily available in the same stored representations used for word identification (Goldinger, 1998). If the latter account is correct, then listeners should be able to use talker variation not only in long-time-scale but also in short-time-scale language comprehension (words). In the current study, we found that talker variation affects word recognition prior to the point at which speech-sound information is useful. This supports the notion that listeners represent phonemic and nonphonemic variability in conjunction, though it remains possible that separate lexical and episodic information combine to determine these effects. We are currently exploring the acoustic specificity of these representations, and the role of lengthier acoustic context in normalization.


How Our Hands Help Us Think About Space

Susan Goldin-Meadow


Language does not lend itself to talking about space.  Space is continuous, language is discrete.  As a result, there are gaps in our talk about space.  Because gesture can capture continuous information, it has the potential to fill in those gaps.  And, indeed, when people talk about space, they gesture.  These gestures often convey information not found in the words they accompany, and thus provide a unique window onto spatial knowledge.  But gestures do not only reflect a speaker’s understanding of space, they also have the potential to play a role in changing that understanding and thus play a role in learning.


Rhythm, Timing and the Timing of Rhythm

Amalia Arvaniti


The notion that languages can be rhythmically classified as stress- or syllable-timed has gained increased popularity since the introduction of various metrics -- such as the PVIs of Grabe & Low (2002) or the %V-ΔC of Ramus et al. (1999) -- that seek to quantify the durational variability of segments and use this quantification as a means of rhythmically classifying languages. Since rhythm metrics have been used extensively to support research on language acquisition and speech processing that relies on the idea of languages belonging to one of two rhythmic types, it is important to critically examine both the empirical basis and the theoretical assumptions behind these metrics.
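
For readers unfamiliar with these metrics, here is a minimal sketch of what they compute, following the published formulas: the normalized Pairwise Variability Index (nPVI) of Grabe & Low (2002) and the %V/ΔC measures of Ramus et al. (1999). The interval durations below are hypothetical, and the sketch takes no position on the classification debate the talk addresses:

```python
import statistics

def npvi(durations):
    """Normalized Pairwise Variability Index (Grabe & Low, 2002):
    the mean absolute difference between successive interval durations,
    normalized by their local mean and scaled by 100."""
    pairs = zip(durations, durations[1:])
    return 100 * statistics.mean(
        abs(d1 - d2) / ((d1 + d2) / 2) for d1, d2 in pairs)

def percent_v_delta_c(vocalic, consonantal):
    """%V and deltaC (Ramus et al., 1999): the proportion of utterance
    duration occupied by vocalic intervals, and the standard deviation
    of consonantal interval durations."""
    pct_v = 100 * sum(vocalic) / (sum(vocalic) + sum(consonantal))
    delta_c = statistics.pstdev(consonantal)
    return pct_v, delta_c

# Perfectly isochronous intervals give nPVI = 0; alternating long and
# short intervals drive nPVI up -- the intended signature of stress-timing.
assert npvi([0.1, 0.1, 0.1]) == 0
assert npvi([0.2, 0.1, 0.2, 0.1]) > npvi([0.12, 0.1, 0.12, 0.1])
```

Note that both measures operate purely on segmental interval durations, which is exactly the conflation of timing with rhythm that the talk goes on to criticize.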

I show that the success of metrics at rhythmic classification is much more modest than originally anticipated. I argue that this lack of success has its origins in the misinterpretation and simplification of Dauer's original ideas, on which metrics are said to be based, and in particular on the confounding of segmental timing with rhythm. I further argue that these problems cannot be corrected by "improving" on metrics, due to (a) the lack of independent measures associated with the notion of metrics and rhythmic classification in general, and (b) the psychological implausibility of the notion of syllable-timing in particular.

I propose that in order to understand rhythm, it is necessary to decouple the quantification of timing from the study of rhythmic structure, and return to Dauer's original conception of a rhythmic continuum ranging from more to less stress-based. A conception of rhythm as the product of prominence and patterning is psychologically plausible, and does not rely on a questionable and ultimately unsuccessful division between languages or the measuring of timing relations. A proposal along these lines and data from a production experiment are presented.
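
To make the metrics under discussion concrete, here is a minimal sketch (the duration values are invented, purely for illustration) of the normalized PVI of Grabe & Low (2002) and the %V measure of Ramus et al. (1999), both computed from a sequence of segment durations:

```python
def npvi(durations):
    """Normalized Pairwise Variability Index (Grabe & Low, 2002):
    100 * mean of |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2)."""
    pairs = list(zip(durations, durations[1:]))
    return 100 * sum(abs(a - b) / ((a + b) / 2) for a, b in pairs) / len(pairs)

def percent_v(segments):
    """%V (Ramus et al., 1999): the proportion of total utterance
    duration made up of vocalic intervals. `segments` is a list of
    (duration, is_vocalic) pairs."""
    total = sum(d for d, _ in segments)
    vocalic = sum(d for d, v in segments if v)
    return 100 * vocalic / total

# Invented interval durations (ms), for illustration only:
print(npvi([80, 120, 60, 140]))
print(percent_v([(80, True), (50, False), (120, True), (70, False)]))
```

The point at issue in the talk is not how such quantities are computed, but whether quantifying segmental timing in this way measures rhythm at all.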


Brain Indices of Syntactic Dependencies in Japanese
Evidence for Language-Universal and Language-Specific Aspects of Neurocognitive Processing

Mieko Ueno

+ more

One of the more challenging questions in sentence processing has been whether all parsing routines are universal, or whether some can or must be language-specific. In this talk, I present a series of event-related brain potential (ERP) studies of syntactic dependencies in Japanese, including scrambling, relative clauses, and wh-questions, to shed light on this question.

Previous ERP studies of wh-movement languages such as English and German (e.g., Kluender & Kutas, 1993; King & Kutas, 1995; Fiebach et al., 2002; Felser et al., 2003; Phillips et al., 2005) report left-lateralized anterior negativity (LAN) elicited between the displaced wh-fillers and their gaps. LAN is thought to index increased verbal working memory load due to a dependency between a wh-filler and its gap. In addition, late positivity has been reported at the gap position, which is said to index the syntactic integration cost of the displaced filler (Kaan et al., 2000; Fiebach et al., 2002; Felser et al., 2003; Phillips et al., 2005).

Unlike English and German, Japanese is an SOV wh-in-situ language that allows scrambling. In addition, Japanese relative clauses are prenominal instead of postnominal, and these typological differences affect the nature of the processing demands for filler-gap dependencies in Japanese. However, despite these striking differences, the way the brain processes syntactic dependencies in Japanese looks remarkably familiar. While only known ERP components are elicited in these contexts, pointing to the universality of parsing, they pattern and combine in ways that differ subtly enough to create an aggregate profile that accommodates, and does justice to, the language-specific features of Japanese as well.


Integrating Conceptual Knowledge Within and Across Representational Modalities

Chris McNorgan

+ more

Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes, differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, within- and between-modality communication is accomplished using either direct connectivity or a central semantic hub. In deep models, modalities are connected by cascading integration sites with successively wider receptive fields. Deep models predict a within-modal advantage for feature inference but a cross-modal advantage for pattern completion, whereas shallow models predict no difference for either task. The pattern of decision latencies across a series of complementary behavioural studies using both feature inference and pattern completion is consistent with a deep integration hierarchy.


The Kernels of Phonology in a New Sign Language

Wendy Sandler

+ more

The property of duality of patterning – the existence of two levels of structure, a meaningful level of words and sentences alongside a meaningless level of sounds – has been characterized as a basic design feature of human language (Hockett 1960). Some have also argued that phonology must have existed prior to hierarchical syntactic structure in the evolution of language (Pinker & Jackendoff 2005). Sign languages were admitted to the 'bona fide language club' only after Stokoe (1960) demonstrated that they do exhibit duality. But is it possible for a conventionalized language to exist without a fully developed phonological system – without duality?

Using evidence from a sign language that has emerged over the past 75 years in a small, insular community, I will show that phonology cannot be taken for granted. The Al-Sayyid Bedouins have a conventionalized language with certain syntactic and morphological regularities (Sandler et al 2005, Aronoff et al 2008), but the language is apparently still in the process of developing a level of structure with discrete meaningless units that behave systematically. In other words, we don't find evidence for a full-blown phonological system in this language.

Can a language go on like this?  Data from children and from families with several deaf people help to pinpoint emerging regularities and complexity at the level of meaningless formational elements in ABSL.  While phonology in language cannot be taken for granted, then, its existence in all older languages, spoken and signed, suggests that it is inevitable. Rather than assume that phonology is somehow 'given' or hard-wired, this work leads us to ask, Why and how does it arise?


Dynamics and Embodiment in Language Comprehension

Michael Spivey

+ more

There are several findings that suggest bi-directional influences between language and vision. From visual search to sentence comprehension, I will discuss experimental results suggesting that sometimes language can tell vision what to do, and sometimes vision can tell language what to do. Along with many other studies, these findings of fluid interaction point toward an account of perceptual/cognitive processing that can accommodate linguistic and visual processes in a common format of representation: a "continuity of mind", if you will.


Oral and Gestural Motor Abilities and Early Language - Talking with the Mouth

Katie Alcock

+ more

New evidence is accumulating to link both the ontogenesis and the phylogenesis of language to motor control (Arbib, 2005; Hill, 2001). This is in addition to much long-standing evidence linking communicative gesture to early language and to delays in early language (Bates et al., 1979; Thal et al., 1997).

However, most children learning language are learning a spoken language, and some children who have language-learning difficulties also have difficulties with oral motor control (Dewey et al., 1998; Gernsbacher 2008).  We set out to compare the relationships between manual gesture, early language abilities, and oral motor control, controlling for overall cognitive ability, in typically developing children aged 21 months, followed up at age 36 months (N=58).

At 21 months, relationships were found between vocabulary production, production of complex language, and vocabulary comprehension (measured using a British version of the MacArthur-Bates CDI; Hamilton et al., 2000) on the one hand, and oral motor abilities on the other, with an additional relationship between vocabulary comprehension and memory for manual gesture sequences. After controlling for cognitive ability and SES, however, only oral motor control was related to language production, and only cognitive ability was related to language comprehension.

At 36 months concurrent relationships were found between oral motor control, imitation of meaningless gestures, and expressive and receptive language (as measured on the Preschool Language Scale).  When performance at 21 months was controlled for, 36 month expressive language was most strongly related to oral motor abilities at 36 months.  Receptive language abilities at 36 months were predicted by 21 month vocabulary comprehension, and in addition were related to meaningless manual gesture imitation ability at 36 months.

We concluded that the articulatory component of both the tests of nonverbal oral motor abilities and of the language assessments is likely to mean these assessments are measuring very closely overlapping abilities.  Children who are learning to speak with their mouths seem to either need good oral motor skills, or develop these as a result of articulatory practice.

On the other hand, imitation of meaningless gesture is likely to draw heavily on children's visuo-spatial abilities and/or executive function abilities, and hence be related to language comprehension abilities.  I will discuss in addition the potential use of early manual and oral motor assessments in predicting later language delay.


Successful Bilingualism: What the Spanish-Minors Reveal About Language Production

Tamar Gollan

+ more

Our ability to speak is one of the things that arguably makes us most different from animals. Extending this thought into a continuum would seem to place people who speak multiple languages further from monkeys than people who speak just one, and indeed many people go to great lengths to become proficient in more than one language. But high levels of proficiency in more than one language lead to some subtle but significant processing costs for dominant-language fluency. In previous talks I have told you about how early bilinguals name pictures more slowly, have reduced verbal fluency, and experience more naming failures (tip-of-the-tongue states) than monolinguals. It might seem that bilingual disadvantages for language tasks should primarily be attributed to interference between languages; however, I have argued that early bilingual disadvantages are (in most cases) best explained by assuming reduced frequency of use relative to monolinguals. In this talk I will tell you about the effects of proficient late bilingualism on dominant-language fluency, TOT rates, and picture-naming times. These data reveal a role for between-language interference in bilingual language production, and provide clues as to when competition during lexical selection is fiercest. Studies of late second-language acquisition typically focus on proficiency in the second language. In these experiments we take a different approach by focusing on how learning a second language affects your first language. Our data reveal that early and late bilingualism have some similar but also some different consequences for dominant-language production. These contrasts provide clues as to what leads to successful bilingualism while also revealing the mechanisms fundamental to proficient language use in speakers of all types (mono- and bilingual).


Gradiency in Syntactic Garden-Paths Revealed by Continuous Motor Output

Thomas Farmer

+ more

On-line syntactic processes are highly dynamic and interact with other perceptual and representational systems in real time. In this talk, I present a series of studies that utilize the "visual-world" paradigm to assess how scene-based referential context impacts the resolution of structural ambiguities. In the paradigm employed here, participants click and move objects around a visual display in response to instructions that contain temporary syntactic ambiguities. To complement previous eye-tracking work, our most recent work presents continuous data from the action phase of each trial. Nonlinear trajectories were recorded from computer-mouse movements, providing a dynamic and continuous dependent measure that can reveal subtle gradations in the resolution of temporarily ambiguous "garden-path" sentences. Analysis of movement trajectories revealed that when a scene-based context supports the incorrect analysis of the ambiguity, movement trajectories curve subtly toward a location on the screen corresponding to the incorrect interpretation, before terminating at the ultimately correct display-location. When the visual context supports the ultimately correct interpretation, however, no commensurate curvature is observed. These effects, evident in the dynamic data produced within each trial, fail to support a characterization of syntactic structure interpretation as an all-or-nothing process in which a discrete re-analysis either will or will not be required. Instead, they highlight the gradiency in the degree to which correct and incorrect syntactic structures are pursued over time, thus providing support for competition-based accounts of constraint-based sentence processing.
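
The trajectory curvature described above is commonly quantified as the maximum perpendicular deviation of the mouse path from the straight line joining its start and end points. A minimal sketch of that measure (the trajectory coordinates are invented; this is the standard index, not the study's actual analysis pipeline):

```python
import math

def max_deviation(traj):
    """Maximum perpendicular deviation of a 2-D trajectory from the
    straight line joining its first and last points -- a standard
    mouse-tracking index of attraction toward a competitor object."""
    (x0, y0), (x1, y1) = traj[0], traj[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    # Point-to-line distance via the cross product, for each sample:
    return max(abs(dy * (x - x0) - dx * (y - y0)) / length for x, y in traj)

# Invented trajectory that bows toward a competitor before reaching
# the target (larger deviation = stronger attraction):
print(max_deviation([(0, 0), (2, 1.5), (4, 2.5), (6, 2.0), (10, 0)]))
```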


Child-Driven Language Input:  Insights from Young Signing Children

Amy Lieberman

+ more

Successful communication in sign language requires individuals to maintain visual attention, or eye gaze, with their interlocutors. For deaf children, visual attention is the source of both linguistic and non-linguistic input, thus the ability to obtain and maintain attention becomes a crucial factor in language development. This study explored initiations and turn-taking behaviors among deaf children and deaf and hearing adults in a classroom setting in which ASL was the primary mode of communication. Analysis of peer interactions revealed that children used objects, signs, and conventional attention-getters (e.g. waving or tapping) to obtain attention. Analysis of individual differences showed a high correlation between age, language proficiency, and the number and type of initiations attempted. By the age of two, children exhibited the ability to actively manage their own communication using complex attention-getting and turn-taking behaviors. These findings suggest that early and consistent exposure to sign language enables children to develop the meta-linguistic skills necessary for interaction in a visual language.


Do Phonological Awareness and Coding Predict Reading Skill in Deaf Readers? A Meta-Analysis

Alex Del Giudice

+ more

Phonological awareness, or coding, skills are hypothesized to play a key role in reading development for readers who hear, although the direction and size of these effects is controversial. We investigated the relation between phonological awareness/coding skills and reading development in readers who are deaf with a meta-analysis. From an initial set of 230 relevant publications addressing this question, we found 25 studies that measured the relationship directly and experimentally. Our analyses revealed that the average relationship of phonological awareness/coding to reading level in readers who are deaf is low to medium in size, though variability is high. Variables such as experimental task, reading measure, and reader characteristics can explain the variation across study results. The small and unreliable relation between phonological awareness/coding and reading in the deaf population suggests that it plays a minor role in their reading achievement.
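
As a rough illustration of the pooling step in such a meta-analysis (the study values below are invented, not taken from the 25 studies analyzed), correlations are conventionally averaged via Fisher's z transform, weighting each study by n - 3:

```python
import math

def pooled_correlation(studies):
    """Fixed-effect pooling of correlation coefficients via Fisher's z
    transform. Each study is an (r, n) pair; z-values are weighted by
    n - 3 (the inverse of the variance of z), then back-transformed."""
    zs = [(math.atanh(r), n - 3) for r, n in studies]
    z_bar = sum(z * w for z, w in zs) / sum(w for _, w in zs)
    return math.tanh(z_bar)

# Invented (r, n) pairs, for illustration only:
print(pooled_correlation([(0.30, 40), (0.15, 25), (0.45, 60)]))
```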


The influence of plausibility on eye movements during reading

Keith Rayner

+ more

It is a well-known and highly robust finding that word frequency and word predictability have strong influences on how long readers fixate on a word and the probability of skipping the word.  Recently, other variables, like age-of-acquisition and plausibility, have also been demonstrated to influence eye movements. In this talk, I will review our initial investigation of plausibility effects and also discuss more recent studies we have completed dealing with how plausibility influences eye movements. Parallels will be drawn to research on word frequency effects and also garden path effects in sentence parsing.   Implications of the research findings for models of eye movement control and on-line sentence processing will be discussed.


Speaking vs. signing:  How biology affects the neurocognitive processes for language production

Karen Emmorey Professor, Speech, Language, and Hearing Sciences
San Diego State University

+ more

Sign languages provide crucial insights into what aspects of language processing are affected by the perceptual systems engaged for comprehension (vision vs. audition) and by the motor systems used for production (the hands vs. the vocal tract).  In this talk, I will discuss whether and how the different biological properties of sign and speech impact the neural systems that support language production.  In addition, I will present a set of experiments that explore the distinct properties of the perception-production interface for signing compared to speaking.  These experiments explore whether visual feedback plays the same role as auditory feedback during language production.


How the perceptual system adjusts to speaker variability

Tanya Kraljic

+ more

Perceptual theories must explain how perceivers extract meaningful information from a continuously variable physical signal. In the case of speech, the puzzle is that little reliable acoustic invariance seems to exist. In the experiments I will present, I tested the hypothesis that speech-perception processes recover invariants not about the signal, but rather about the source that produced the signal. Findings from two manipulations suggest that the system learns those properties of speech that result from idiosyncratic characteristics of the speaker; the same properties are not learned when they can be attributed to incidental factors. The question then becomes: How might the system distinguish these properties? The experiments suggest that in the absence of other information about the speaker, the system relies on episodic order: Those properties present during early experience are represented. This "first-impressions" bias can be overridden, however, when additional information provided to the system suggests that the variation is an incidental consequence of a temporary state (a pen in the speaker's mouth), rather than characteristic of the speaker.


Children's interpretation of third person present -s as a cue to tense

Tim Beyer

+ more

While comprehension generally precedes production in development, this may not be true for 3rd person present –s. Studies have shown that even 5- and 6-year-old children do not yet understand all the meanings encoded by –s (de Villiers & Johnson, 2007; Johnson, de Villiers, & Seymour, 2005; Keeney & Wolfe, 1972). Here, we examine whether 6- and 7-year-old Standard American English speaking children comprehend the temporal information encoded by –s, as compared to lexical items and past tense –ed. Experiment 1 assessed off-line performance and found that all children successfully interpreted the lexical items and –ed, but only the 7-year-olds successfully interpreted –s. Eye-tracking measures in Experiment 2 confirmed these results and revealed that the 6-year-olds are also sensitive to –s as a cue to tense, but it may not be a strong cue at this age. We argue that the relatively late acquisition of –s is due to characteristics specific to –s that make its meaning less transparent than that of other tense morphemes, such as –ed.


Gesture as input in language acquisition

Whitney Goodrich University of California, Berkeley

+ more

In every culture of the world, whenever you hear people speaking, you see people moving their hands. These co-speech gestures are not random, but contain meaningful information. My research explores the extent to which listeners are sensitive to the information conveyed in gesture, and whether this can be a source of input for young children acquiring language. I will be discussing research demonstrating that both children and adults rely on gesture to inform their interpretation of novel verbs and ambiguous pronouns, and discuss how gesture may help children learn to understand anaphora.


The Deictic Urge

Kensy Cooperrider

+ more

Pointing is a common accompaniment to speech. Yet researchers in the cognitive sciences have been much more interested in the pointing behaviors of orangutans and infants than in how pointing is used in fully adult, fully human discourse. As a result, we know very little about how, when, and why adult speakers point in face-to-face interaction. In this talk I will: 1) discuss the results of a recent armchair ethnographic exercise in which we analyzed 45 pointing gestures from a 1990 interview between Michael Jordan and Arsenio Hall; 2) introduce the idea that pointing reflects a deictic urge -- that is, a human urge to anchor the entities of discourse in real or conceptual space as we speak; and 3) describe a series of observational studies I am conducting to investigate the how, when, and why of pointing gestures. I argue that, in addition to being a phenomenon of central interest to gesture studies, pointing provides a crucial window into the role of spatial thinking for speaking.


The Spatiotemporal Neural Dynamics of Word Knowledge in Infants

Katie Travis

+ more

The learning of words is one of the first and most important tasks confronting the young child. By 12 months of age, children are already capable of learning words and have also started speaking. Decades of behavioral research have provided important insights into how infants learn their first words. Yet, as important as this process is, we know virtually nothing about the neural mechanisms that make early word learning possible. Thus, in order to better understand how the infant mind acquires language, it will be important to determine when and where word learning occurs in the infant brain.

Taking a neurobiological perspective of language development, I propose to study neural activity related to language processes in infants by combining non-invasive brain imaging technologies such as magnetoencephalography (MEG) and structural magnetic resonance imaging (MRI). In this way, I will be able to obtain both functional/temporal (MEG) and anatomical (MRI) information about when and where activity related to language processing occurs in the developing brain. Combining these techniques will also help me to overcome some of the limitations of the individual technologies and the inherent difficulties of imaging infants. For this talk, I will be discussing how I have adapted MEG and MRI techniques to study neural processes related to early language development in young infants. Specifically, I will be describing preliminary results from an initial study aimed at investigating the spatial and temporal dynamics of semantic knowledge in infants ages 12-15 months.


Verb Argument Structure In The Language Of Latino Preschoolers With And Without Language Impairment: Preliminary Findings

Gabriela Simon-Cereijido

+ more

Previous research has indicated that English-speaking and Spanish-speaking children with language impairment (LI) have difficulties with verbs with a greater number of arguments in spontaneous language. The purpose of my project is to evaluate the role of verb argument structure (VAS) in the language of Latino Spanish-speaking preschoolers with and without LI who are English Language Learners. The specific goals of this study are to examine: 1) whether children with LI have more omissions of verbs and arguments than age- and language-matched controls in a Spanish picture description task, and 2) whether children with LI are less accurate with ditransitive verbs than with transitive and intransitive verbs. If children with LI omit more verbs and arguments than language-matched controls, this may indicate that VAS deficits are specific to LI and not developmental. In addition, if children with LI have more errors with ditransitive verbs than with the other verbs, this may suggest that processing capacity limitations hinder their production of predicates with more arguments due to increased processing load. Alternatively, if their omission rates are not more pronounced with predicates with more arguments, this may point to limitations in the overall verb system. Ultimately, this project's findings will inform clinical issues related to assessment and intervention of Latino ELLs, a growing segment of the pediatric population.


Surprisal as optimal processing

Nathaniel Smith

+ more

I present a theoretical and empirical investigation of the precise role that probability in context, P(word|context), plays in reading. It is well known that words which are more predictable are also read more quickly (e.g. Ehrlich and Rayner, 1981). Not yet known, however, is the precise functional form of this relation; authors have suggested that probability's contribution is logarithmic, linear, or even reciprocal, while empirical work has made only factorial comparisons that provide limited insight into curve shape. There is also as yet no consensus on why these effects occur, or take whatever form they do. We address these issues by (a) presenting a simple theoretical model of reading time which explains its sensitivity to probability as arising within an optimal processing framework, and which strongly predicts a logarithmic relation between probability and time; and, (b) giving supporting evidence from an empirical study of reading times.


Toward a Discourse Model of Ellipsis

Laura Kertz

+ more

I present results from a series of magnitude estimation experiments which demonstrate the effect of information structure on ellipsis acceptability.  I show how these results reconcile apparently contradictory claims in the literature, where information structure has previously been confounded with other levels of representation, including syntax and discourse coherence. I also discuss the implications of these findings for recent processing-based models which link ellipsis acceptability to the specific task of antecedent reconstruction, and compare the predictions of those models to a more general discourse processing approach.


Modeling uncertainty about the input in online sentence comprehension

Roger Levy

+ more

Nearly every aspect of language processing is evidential---that is, it requires informed yet uncertain judgment on the part of the processor. To the extent that language processing is probabilistic, this means that a rational processing strategy could in principle attend to information from disparate sources (lexical, syntactic, discourse context, background world knowledge, visual environment) to optimize rapid belief formation---and there is evidence that information from many of these sources is indeed brought to bear in incremental sentence comprehension (e.g., MacDonald, 1993; Frazier & Rayner, 1982; Rohde et al., 2008; McRae et al., 2005; Tanenhaus et al., 1995). Nevertheless, nearly all formalized models of online sentence comprehension implicitly contain an important interface constraint that limits the use of cross-source information in belief formation: namely, the "input" to the sentence processor consists of a sequence of words, whereas a more natural representation would be something like the output of a word-recognition model---a probability distribution over word sequences.  In this talk, I examine how online sentence comprehension might be formalized if this constraint is relaxed.  I show how generative probabilistic grammars can be a unifying framework for representing both this type of uncertain input and the probabilistic grammatical information constituting a comprehender's knowledge of their own language.  The outcome of the comprehension process is then simply the intersection of a probabilistic input with a probabilistic grammar.  I then show how this model may shed light on two outstanding puzzles in the sentence comprehension literature: (i) data underlying the "good enough representation" approach of (F.) Ferreira et al. (2003), such as (1) below:

While Anna dressed the baby spit up in the bed.

where "the baby" is taken by many readers to be both the theme of "dressed" and the agent of "spit up", and (ii) the local-coherence effects of Tabor et al. (2004), in which sentences such as (2) below:

The coach smiled at the player tossed the frisbee.

elicit what are apparently classic garden-path effects despite the fact that global context seemingly should rule out the garden path before it is ever pursued.
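
The intersection idea can be illustrated at the level of whole strings: the comprehender's belief in each candidate string is proportional to the product of its input (word-recognition) probability and its grammar probability. A toy sketch with invented numbers, loosely based on the Tabor et al. example (a full model would operate over parses, not just strings):

```python
def intersect(input_probs, grammar_probs):
    """Posterior over candidate word strings: proportional to the
    product of input probability and grammar probability, normalized."""
    joint = {s: input_probs[s] * grammar_probs.get(s, 0.0)
             for s in input_probs}
    z = sum(joint.values())
    return {s: p / z for s, p in joint.items()}

# Invented probabilities: the percept favors "at", but the grammar
# strongly favors the string with "as", so the comprehender entertains
# an analysis the literal input should have ruled out.
input_probs = {"the coach smiled at the player tossed the frisbee": 0.7,
               "the coach smiled as the player tossed the frisbee": 0.3}
grammar_probs = {"the coach smiled at the player tossed the frisbee": 0.001,
                 "the coach smiled as the player tossed the frisbee": 0.05}
posterior = intersect(input_probs, grammar_probs)
```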


The Activation of Verbs in Sentences involving Verb Phrase Anaphors

Sarah Callahan

+ more

Numerous psycholinguistic studies have focused on the processing of noun phrase (NP) anaphors (e.g. pronouns). This research has suggested that the presentation of a noun activates its lexical representation, that this activation declines rapidly over the next 700-1000ms and, critically, that the presentation of a co-referential anaphor immediately re-activates this representation (cf. Nicol & Swinney, 2002). In contrast, comparatively few studies have investigated verb phrase (VP) anaphors, so although it is clear that the presentation of a verb activates its lexical representation (including meaning, argument structures, and thematic roles (e.g. Ferretti, McRae, & Hatherell, 2001; Shapiro, Zurif, & Grimshaw, 1987)), little is known about the duration of this activation and any (re-)activation at a corresponding anaphor.

The current study comprises two experiments using cross-modal lexical priming to investigate the activation of a verb throughout two conjoined sentences involving a VP anaphor (e.g. did too). The results indicated that activation related to the initial presentation of the verb was undetectable by a point approximately 1500ms following presentation. This finding fits with evidence from nouns that activation related to the initial presentation decays relatively quickly; on the other hand, contrary to typical findings for nouns, the verb was active at all points tested in the second sentence rather than just at the corresponding anaphor. Based on the points tested, this pattern of results suggests the verb was reactivated following the conjunction (i.e. and) and that this activation was maintained throughout the second sentence at least until a point immediately following the anaphor. Overall, these findings suggest important differences in the activation of verbs and nouns during sentence processing and highlight the need for further work on this issue.


Emergent Conceptual Hierarchies and the Dynamics of Similarity

Ken McRae University of Western Ontario

+ more

People's knowledge of concrete nouns usually is viewed as hierarchical. The goal of the present research is to show that behavior that appears to implicate a hierarchical model can be simulated using a flat attractor network. The network learned to map wordforms for basic-level concepts to their semantic features. For superordinate concept learning, wordforms were paired equally often with one of its exemplar's representations so that typicality was not built into the training regime, and the network developed superordinate representations based on experience with exemplars. We established the basic validity of the model by showing that it predicts typicality ratings. Previous experiments have shown roughly equal superordinate-exemplar priming (fruit priming cherry) for high, medium, and low typicality exemplars. Paradoxically, other studies and attractor network simulations show that basic-level concepts must be highly similar to one another to support priming. We conducted an experiment and simulation in which priming was virtually identical for high and medium/low typicality items. In the model, unlike features of basic-level concepts, superordinate features are partially activated from a wordform due to a superordinate's one-to- many mapping. Thus, it is easy for a network to move from a superordinate representation to the representation of one of its exemplars, resulting in equivalent priming effects regardless of typicality. This research shows that a flat attractor network produces emergent behavior that accounts for human results that have previously been viewed as requiring a hierarchical representational structure, and provides insight into temporal aspects of the influences of similarity.
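
The one-to-many mapping argument can be illustrated with a deliberately simplified sketch. The feature vectors below are invented, and a real attractor network learns a mapping and settles over time rather than averaging, but the core intuition is that a superordinate wordform paired equally often with each exemplar comes to activate roughly the mean of their feature vectors, so shared features end up fully active and exemplar-specific ones only partially active:

```python
def mean_vector(vectors):
    """Elementwise mean: a stand-in for what a wordform trained
    one-to-many against several exemplar feature vectors activates."""
    n = len(vectors)
    return [sum(vals) / n for vals in zip(*vectors)]

# Invented binary semantic features, for illustration only:
cherry = [1, 1, 0, 1, 0]
apple  = [1, 1, 1, 0, 0]
banana = [1, 0, 1, 0, 1]

fruit = mean_vector([cherry, apple, banana])
# Features shared by all exemplars reach full activation; features
# specific to one exemplar remain partially active, which is why moving
# from "fruit" to any particular exemplar is comparatively easy.
```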

Meaning, structure, and events in the world

Mary Hare Bowling Green State University

+ more

Fundamental issues in the representation and processing of language have to do with the interface among lexical, conceptual, and syntactic structure. Meaning and structure are related, and one view of this relationship is that lexical meaning determines structure. In this talk I will argue that the relevant generalizations are not based on lexical knowledge, but on the language user's interpretation of generalized events in the world. A set of priming studies will demonstrate that nouns denoting salient elements of events prime event participants. In addition, corpus analyses and self-paced reading studies will show that different senses of a verb reflect variations on the types of event that the verb refers to, and that this knowledge leads to expectations about subsequent arguments or structure during sentence comprehension.


Routine Validation of Explicit and Implicit Ideas in Reading

Murray Singer

+ more

There is extensive evidence that understanding a sequence as simple as Dorothy poured the water on the bonfire. The fire went out requires the inference that the first event caused the second. Here, it is further proposed that the elements of this sequence must be validated against antecedent text ideas or relevant world knowledge before the inference is accepted by the reader. Otherwise, it would appear neither more nor less coherent than Dorothy poured the water on the bonfire. The fire grew hotter.

Evidence is presented that readers engage in such validation processes in the construction of inferences derived from narrative and expository text, and even for explicitly stated text ideas. These findings are interpreted with reference to a constructionist analysis of discourse comprehension, two assumptions of which are that readers (a) maintain coherence at multiple levels of text representation and (b) try to explain why actions, events, and states are mentioned in the message.


The Neural Correlates of Figurative Expressions

Dieter Hillert

+ more

The linguistic design of the human language system is typically based on assumptions about the compositional structure of literal language. However, it has been estimated that, for instance, in American English people use at least 25,000 idiomatic expressions. The talk will therefore focus on the cognitive and neural correlates of figurative language comprehension. An account of the human language system is suggested that distinguishes between a left-sided core language system and a bilateral pragmatic language network. Comprehension of idiomatic expressions that involve alternative parsing strategies correlates with an increase in cognitive costs compared to comprehension of non-figurative default sentence structures. The costs associated with idiom processing seem to be compatible with those related to resolving syntactic ambiguities or reconstructing canonical sentence structures. Moreover, while ambiguous idioms, like any other kind of standing ambiguity, seem to engage the left superior and medial frontal region to induce search processes through conceptual space, opaque idioms seem to be parsed and rehearsed in Broca's region. By contrast, comprehension of canonical and unambiguous sentences appears to evoke exclusively the left superior and middle temporal cortex. It is concluded that immediate linguistic computations are functionally organized in a modular fashion, but their neural correlates are shared by different cognitive domains.


Daniel Casasanto Stanford University

+ more

How do people transform experience into knowledge? This talk reviews a series of studies testing the hypothesis that our physical experiences in perception and motor action contribute to the construction of even our most abstract thoughts (e.g., thoughts about value, time, happiness, etc.) Further, these studies begin to distinguish the contributions of linguistic experience, cultural experience, and perceptuo-motor experience to the formation of concepts and word meanings. Some experiments show that people who talk differently think differently; others show influences of non-linguistic cultural practices on conceptual structure; others show that people with different bodies, who interact with their environments in systematically different ways, form dramatically different abstract concepts. These demonstrations of linguistic relativity, cultural relativity, and what I will call 'bodily relativity' highlight the diversity of the human conceptual repertoire, but also point to universals in the processes of concept formation.


Interactions between word- and sound-based processes in multilingual speech production

Matt Goldrick

+ more

Interactive effects--where processing at one level is modulated by information encoded at another level--have been the focus of a great deal of controversy in psycholinguistic theories. I'll discuss new evidence from my laboratory examining interactions between word- and sound-level processes in multilingual speech production. These results demonstrate that whole-word properties (cognate status, lexicality) influence the processing of sound structure at both a categorical, segmental level as well as at gradient, phonetic levels.


Investigating situated sentence comprehension: evidence from event-related potentials

Pia Knoeferle

+ more


Cross-linguistic investigation of determiner production

Xavier Alario

+ more

Language production is generally viewed as a process in which conceptual semantic messages are transformed into linguistic information. Such a description is probably appropriate for some aspects of the process (e.g. noun production), yet it is clearly incomplete.

Consider for instance the fact that in numerous languages determiner forms depend not only on semantic information but also on several other kinds of information. In Germanic, Slavic, and Romance languages, the retrieval of the determiners (and other closed-class words, such as pronouns) also depends on a property of the nouns called “grammatical gender.” For instance, in Dutch, nouns belong to the so-called “neuter” gender or to the “common” gender. The definite determiners accompanying the nouns belonging to the two sets are respectively het (e.g. het huis, ‘the house’) and de (e.g. de appel, ‘the apple’). In English, consonant-initial nouns and vowel-initial nouns can require different indefinite article forms (e.g. a pear vs. an apple).

Such properties of determiners surely impose constraints on how these lexical items can be retrieved. For this very reason, determiners provide a broad testing ground for contrasting psycholinguistic hypotheses of lexical processing and grammatical encoding. In my talk, I will review the cross-linguistic research I have been conducting on determiner retrieval. One important question that will be asked, and only tentatively answered, concerns the extent to which open-class words such as nouns and closed-class words such as determiners are processed and selected by similar mechanisms.


The development of word recognition: a cognitive control problem?

Sarah Creel

+ more


Meaning & Motor Action: The role of motor experience in concept formation

Daniel Casasanto Stanford University, Department of Psychology

+ more

How do people transform experience into knowledge? This talk reviews a series of studies testing the hypothesis that our physical experiences in perception and motor action contribute to the construction of even our most abstract thoughts (e.g., thoughts about value, time, happiness, etc.) Further, these studies begin to distinguish the contributions of linguistic experience, cultural experience, and perceptuo-motor experience to the formation of concepts and word meanings. Some experiments show that people who talk differently think differently; others show influences of non-linguistic cultural practices on conceptual structure; others show that people with different bodies, who interact with their environments in systematically different ways, form dramatically different abstract concepts. These demonstrations of linguistic relativity, cultural relativity, and what I will call 'bodily relativity' highlight the diversity of the human conceptual repertoire, but also point to universals in the processes of concept formation.


Sign Language Surprises?

Susan Fisher

+ more

Until quite recently, most research on sign languages has focused on those sign languages based originally in Europe. This talk considers some apparent surprises from less-studied sign languages, such as differences between Asian and Western sign languages in syntax, the use of prosody to convey syntactic distinctions, and especially word formation. If time permits, I shall then return to the proposed commonalities and speculate on why they don’t seem to extend to so-called “village” sign languages.


On words and dinosaur bones: Where is meaning?

Jeff Elman UC San Diego

+ more

Virtually all theories of linguistics and of language processing assume that language users possess a mental dictionary - the mental lexicon - in which is stored critical knowledge of words. In recent years, the information that is assumed to be packed into the lexicon has grown significantly. The role of context in modulating the interpretation of words has also become increasingly apparent. Indeed, there exists now an embarrassment of riches which threatens the representational capacity of the lexicon.

In this talk I will review some of these results, including recent experimental work from adult psycholinguistics and child language acquisition, and suggest that the concept of a lexicon may be stretched to the point where it is useful to consider alternative ways of capturing the knowledge that language users have of words.

Following an idea suggested by Dave Rumelhart in the late 1970s, I will propose that rather than thinking of words as static representations that are subject to mental processing (operands, in other words), they might be better understood as operators, entities that operate directly on mental states in what can be formally understood as a dynamical system. These effects are lawful and predictable, and it is these regularities that we intuitively take as evidence of word knowledge. This shift from words as operands to words as operators offers insights into a number of phenomena that I will discuss at the end of the talk.


Pia Knoeferle

+ more


Klinton Bicknell

+ more


Adam Tierney

+ more


Sarah Callahan

+ more


Leah Fabiano

+ more


Bob Slevc

+ more


Arielle Borovsky

+ more


Kim Plunkett

+ more


Vic Ferreira

+ more


Leah Fabiano

+ more


Michael Ramscar

+ more


Zenzi Griffin

+ more


Henry Beecher

+ more


Robert Kluender

+ more


Hannah Rohde

+ more


Learning and Liking a New Musical System

Psyche Loui

+ more

One of the intriguing characteristics of human cognition is its tendency to make use of relationships between sounds. Sensitivity to sound patterns is especially important for the perception of language and music. While experiments on language have made use of many natural languages as well as some artificial languages, experiments investigating the learning of music to date have mostly relied on sounds which adhere to principles of Western music.

I will present several studies that investigate the learning of a novel system of musical sounds. The system is based on the Bohlen-Pierce scale, a microtonal system tuned differently from the traditional Western scale. Chord progressions and melodies were composed in this scale as legal exemplars of two sets of grammatical rules. Participants listened to melodies in one of the two grammars, and completed learning-assessment tests which included forced-choice recognition and generalization, pre- and post-exposure probe tone ratings, and subjective preference ratings. When given exposure to a small number of melodies, listeners recognized and preferred melodies they had heard, but when exposed to a sufficiently large set of melodies, listeners were able to generalize their recognition to previously-unencountered instances of the familiar grammar.

Event-Related Potentials in response to infrequent chords in the new musical system revealed a frontal Early Anterior Negativity (EAN) at 150-210ms, followed by a prefrontal Late Negativity (LN) at 400-600ms. These effects increased over the course of the experiment, and were dictated by the relative probability of the chords. Findings in the new musical system parallel those obtained in Western music and also predict individual differences in behavioral tests. We conclude that musical experience recruits a flexible set of neural mechanisms that can rapidly integrate sensory inputs into novel contexts. This rapid integration relies on statistical probabilities of sounds, and may be an important cognitive mechanism underlying music and language.


Multiple logistic regression and mixed models

Roger Levy & Florian Jaeger

+ more

Multiple regression models are a generalization of ANOVAs. Modern variants of regression models (so-called mixed models) come with a number of advantages, such as scalability, increased power, and reduced dependence on balanced designs. For example, standard ANOVAs require balanced designs, which often leads to very unnatural distributions of stimuli types within an experiment. Modern regression models can to some extent free researchers from these restrictions. Multiple regressions easily afford the inclusion of different kinds of independent variables (such as categorical and continuous variables) in the same analysis. In contrast to ANOVAs, relative effect sizes and directions of effects are directly evident from multiple regression outputs.

We give an introduction to multiple regression and mixed models (in the software package R). We use real psycholinguistic data samples and show step by step how the analyses are performed. The experimental data we use have categorical dependent variables (such as priming data, answer accuracy, multiple choice, etc.). Data of this kind are usually analyzed using ANOVAs over percentages. This is problematic for a couple of reasons that we will discuss. We discuss the pros and cons of an alternative analysis, called logistic regression. Traditional logistic regression does not allow for the modeling of random subject or item effects. We show how modern statistical methods, such as logit mixed models or bootstrapping over subjects and items, address this challenge. In the course of this, we also go through some tools and visualizations in R that we find particularly useful.
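The advantage of modeling binary trials directly, rather than running ANOVAs over percentages, can be made concrete with a minimal sketch. The example below uses Python with made-up accuracy data (the tutorial itself uses R, where glm(y ~ x, family = binomial) fits the same model properly); the fitted slope is directly interpretable as a log odds ratio.

```python
import math

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit y ~ b0 + b1*x by maximizing the Bernoulli log-likelihood with
    plain gradient ascent -- a toy stand-in for R's glm(..., family=binomial)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))  # predicted P(correct)
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical trial-level data: 40/100 correct in condition 0,
# 80/100 in condition 1 (each trial kept as 0/1, not averaged into percentages).
xs = [0] * 100 + [1] * 100
ys = [1] * 40 + [0] * 60 + [1] * 80 + [0] * 20
b0, b1 = fit_logistic(xs, ys)
print(round(b0, 2), round(b1, 2))  # -0.41 1.79: b1 = log odds ratio = log(6)
```

Because each coefficient is on the log-odds scale, the size and direction of the condition effect can be read directly off the output, which is one of the advantages over ANOVAs on percentages that the talk describes.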


Implicit learning of probabilistic distributions of structural events: Evidence from syntactic priming

Neil Snider & Florian Jaeger

+ more

Language users employ structural probabilities when processing utterances (see Pickering & van Gompel, 2006 for an overview). For example, the probability of a specific argument frame given a verb (henceforth VERB BIAS) affects comprehension and production. So language users' knowledge about verbs includes information about their biases (see also Stallings et al., 1998). This raises the question whether VERB BIASES are acquired once (e.g. during a critical period) or whether speakers keep learning VERB BIASES. We argue that a phenomenon known as syntactic priming yields evidence for the latter.

Syntactic priming (e.g. Bock, 1986) refers to the tendency of speakers to repeat abstract syntactic patterns. Consider the ditransitive alternation:

(1a) We could give [physicals] [to the rest of the family members].
(1b) We could give [the rest of the family members] [physicals].

Speakers are more likely to choose the NPNP construction if there has been an NPNP construction in the preceding discourse (and, mutatis mutandis, for NPPP). Such syntactic priming has been attributed to implicit learning (Bock & Griffin, 2000, 2006; Ferreira, in progress). Implicit learning predicts that less frequent (and hence more surprising) events lead to more activation (and hence more learning). So, if speakers keep track of VERB BIAS, and if priming effects are in part due to this implicit learning, priming strength (i.e. the increase in likelihood that a prime and target have identical structures) should be inversely correlated with VERB BIAS.

Study 1 is a meta-analysis of five ditransitive priming experiments (Bock & Griffin, 2000, 2006). After exclusion of incomplete trials, the data consist of 8,212 prime-target trials. We find that the prime's VERB BIAS is inversely correlated with its priming strength. This effect is highly significant (p < .001) even after accounting for all factors from Bock & Griffin's (2000, 2006) analyses, as well as additional controls.

Study 2 replicates the effect for spontaneous speech. We use a database of 2,300 ditransitives extracted by Bresnan et al (2004; also Recchia et al., 2006) from the full Switchboard corpus (LDC, 1993). We find the predicted inverse effect of the prime's VERB BIAS to be highly significant (p < .001), even after controlling for other factors influencing the choice between NPPP and NPNP (Bresnan et al., 2004).

We conclude that priming strength is inversely related to the surprisal associated with the prime's structure (given the prime's VERB BIAS). Only implicit learning accounts of syntactic priming (Bock & Griffin, 2000) predict this relation. Our results also argue that speakers continuously 'keep track' of probabilistic distributions even of such fine-grained events as the conditional probability of a syntactic construction given the verb (VERB BIAS).
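The two quantities the argument turns on can be illustrated with a small sketch (hypothetical counts, not the talk's data): VERB BIAS is a conditional probability estimated from frame counts, and the surprisal of a prime's structure is the negative log of that probability, so low-bias (unexpected) primes carry high surprisal.

```python
import math

# Hypothetical ditransitive frame counts per verb:
# how often each verb occurs in the NP-NP vs. the NP-PP frame.
counts = {
    "give": {"NPNP": 80, "NPPP": 20},
    "send": {"NPNP": 30, "NPPP": 70},
}

def verb_bias(verb, frame):
    """VERB BIAS: P(frame | verb), estimated from the counts."""
    c = counts[verb]
    return c[frame] / (c["NPNP"] + c["NPPP"])

def surprisal(verb, frame):
    """-log2 P(frame | verb): higher for less expected prime structures."""
    return -math.log2(verb_bias(verb, frame))

# An NP-PP prime with 'give' is unexpected (high surprisal), so on the
# implicit-learning account it should prime more strongly than an NP-NP prime:
print(surprisal("give", "NPPP"))  # ~2.32 bits
print(surprisal("give", "NPNP"))  # ~0.32 bits
```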


All pronouns are not created equal: The processing and interpretation of null and overt pronouns in Spanish

Sarah Callahan

+ more

This study investigated the interpretation and processing of null and overt subject pronouns in Spanish. Participants completed an antecedent identification questionnaire and a word-by-word self-paced reading task. Both presented sentence pairs, the second of which contained an embedded clause. The number of possible referents was varied along with the form of the embedded subject pronoun. In the off-line questionnaire, the number and relative prominence of possible referents affected final interpretation, but the form of the pronoun had no effect. In contrast, in the on-line task, clauses with overt pronouns were read more slowly than those with null pronouns regardless of the number of possible referents. The analyses revealed that this effect was not immediate, but rather occurred later in the clause. Implications for models of the processing of co-reference are discussed.


How do language users estimate probabilities?

Florian Jaeger & Roger Levy

+ more

There is considerable evidence that comprehension and production are (in part) probabilistic (Aylett & Turk, 2004; Gahl & Garnsey, 2004; Garnsey et al., 1997; Jurafsky et al., 2001; Staub & Clifton, 2006; Wasow et al, 2005). Little, however, is understood about the how and what of probabilistic language processing. In particular: (A) What type of information do language users consider when estimating probabilities? (Mitchell et al., 1995) (B) How local does this information have to be? (C) And, how fine-grained are the probabilistic events/units language users keep track of? We address these questions using corpus-based evidence from that-omission in non-subject-extracted relative clauses (NSRCs), where that is less likely for predictable NSRCs (Jaeger, 2006):

(1) [NP1 the words [PP to [NP songs [NSRC (that) she's listening to]]]]

We introduce a two-step modeling approach to distinguish different theories of probability estimation. In the first step, we derive estimates of NSRC predictability based on different assumptions about how speakers track NSRC predictability. In the second step, we compare these different estimates with regard to how much of that-omission they account for (in a logistic regression model including other controls taken from Jaeger, 2006).

We find that speakers are sensitive to the predictability of fine-grained linguistic units, and they estimate predictability using detailed structural cues of the utterance. These cues don't have to be adjacent to the target event, but a lot of the information relevant to the estimation of probabilities seems to be relatively local to the target.

The results are further evidence for probabilistic syntactic production (Jaeger, 2006; also Stallings et al., 1998). They are also modest steps towards a better understanding of probabilistic language processing. We present an interpretation of the data in the form of Uniform Information Density (Levy & Jaeger, to appear): if speakers want to optimize their chance to be understood (while conserving effort), speakers should structure their utterances so as to avoid peaks and troughs in information density. We present preliminary evidence in favor of this view.


Perseveration in English Comparative Production

Jeremy Boyd

+ more

An implicit assumption in many studies that make use of the elicited production methodology is that subjects’ responses reflect their true linguistic competence. The current work challenges this premise by looking at data on English comparative adjective acquisition that were collected using elicited production. We are interested in sequences like the following:

Trial Production
t-2 faster
t-1 older
t dangerouser

The specific question we ask is whether the child’s production of dangerouser on trial t reflects how the child actually thinks dangerous should be inflected for the comparative, or whether dangerouser is a perseveration of the -er pattern of inflection from trials t-1 and t-2. If the latter is true, then it would be infelicitous to conclude—as other researchers have (Graziano-King & Cairns, 2005; Gathercole, 1985)—that overuse of the -er pattern is evidence that children entertain an abstract ADJer mental representation.

We present evidence that bears on this issue from two sources.

First, we experimentally manipulated the temporal structure of the production task such that some subjects received back-to-back production trials, while others received production trials interspersed with trials in which a simple counting task was performed. If errors like dangerouser do result from perseveration, then their likelihood should be reduced when counting trials, which dramatically slow the pace of the task, are included. Second, we calculated a measure known as perseveration probability (Cohen & Dehaene, 1998) across all of the trials that our subjects participated in. This allows us to perform a number of analyses comparing perseveration probabilities across ages, experimental groups, and inflectional patterns. Preliminary results from these two sources of evidence will be discussed at the talk.

The question of whether the elicited production method is subject to perseveration effects is an important one. Our theories of competence, processing, and development are informed by the data that we collect. That these data may change according to how the method is applied suggests that our theories may also have to be adjusted accordingly. Additionally, at the clinical level, perseveration effects may cause some children to fail diagnostic tests of grammar and be labeled as language-impaired when they are, in fact, perfectly normal. Discovering how verbal perseveration works could be helpful in that it may pave the way for the construction of more effective diagnostic tools, which should result in fewer wasted resources.


Roger Levy

+ more

Any theory of human syntactic processing must account for several crucial properties: our ability to effortlessly disambiguate highly ambiguous linguistic input; our ability to make inferences on the basis of incomplete inputs; and the fact that some parts of some sentences are more difficult for us to process than others. In psycholinguistics, the historically preeminent accounts of this last property have appealed primarily to resource limitations (e.g., Clifton & Frazier 1989, Gibson 1998): as a structural representation of the input is incrementally built, having to keep more partial structures in memory for a longer time is penalized. In this talk, however, I argue that an alternative, expectation-based account of syntactic processing -- where a comprehender's ability to predict an upcoming word is the chief determinant of the processing difficulty for that word -- is gaining support in a growing body of experimental results in the online processing of verb-final languages (e.g., Konieczny 2000, Vasishth 2002, Konieczny and Döring 2003) that is proving problematic for resource-limitation theories. I present a new information-theoretic derivation of the surprisal model of processing difficulty originally proposed by Hale (2001) that draws a close connection between the ideas of expectation and incremental disambiguation. I show that the surprisal model accounts for a variety of recent results in syntactic processing, including online processing of clause-final verbs (Konieczny 2000, Konieczny and Döring 2003) and V2 verbs (Schlesewsky et al. 2000) in German, subject-modifying relative clauses in English (Jaeger et al. 2005), and conditions under which syntactic ambiguity can facilitate comprehension (Traxler et al. 1998, van Gompel et al. 2001, 2005).
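The surprisal measure at the heart of this account is simple to state: the difficulty of a word is proportional to -log P(word | context). A minimal illustration, using a toy bigram model over a made-up corpus (the models discussed in the talk condition on full syntactic context, not just the previous word):

```python
import math
from collections import Counter

# Toy corpus; a real model would be estimated from a large treebank.
corpus = "the dog chased the cat the cat chased the mouse".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | prev), from bigram MLE counts."""
    return -math.log2(bigrams[(prev, word)] / unigrams[prev])

# 'the' is followed by dog once, cat twice, mouse once, so
# P(cat|the) = 0.5 and P(dog|the) = 0.25: the less expected word
# carries more surprisal and is predicted to be harder to process.
print(surprisal("the", "cat"))  # 1.0 bit
print(surprisal("the", "dog"))  # 2.0 bits
```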


Phonological representation in bilingual Spanish-English speaking children

Leah Fabiano

+ more

Paradis (2001) proposed the Interactional Dual Systems Model (IDSM) of bilingual phonological representation, which posits separate but non-autonomous systems of representation in bilingual children. This study attempted to provide evidence for interaction between the two systems of bilingual representation through the measurement of (1) accuracy of phonemes shared by English and Spanish and the accuracy of those phonemes specific to either language, (2) the predictive capability of frequency of occurrence of sounds in each language, a markedness variable, and (3) the amount and type of phonological cross-linguistic effects present in the speech of bilingual children. The main hypothesis of this study is that if these interactive characteristics are observed in the speech of bilingual children, they may provide evidence for non-autonomy between the child's two phonological systems. Twenty-four typically-developing children, ages 3;0 to 4;0, were included in this study: eight bilingual Spanish-English speaking children, eight monolingual Spanish speakers, and eight monolingual English speakers. Single word and connected speech samples were obtained for each child in each language. The first step in this series of analyses was to obtain descriptive information for each subject. Paired samples t-tests and Friedman tests were used to examine shared versus unshared phoneme accuracy. A One-Way ANOVA and a post hoc Tukey examining PCC by subject was performed in order to determine that the data could be collapsed. Correlations by subject, on PCC versus frequency, were performed in order to determine the direction, p value, and the strength of the relationship. A Mixed Effects Regression analysis was then performed to determine if frequency was a significant predictor of shared PCC. Substitution errors of both the bilingual and monolingual speakers were examined to provide evidence for cross-linguistic effects.
Results showed that, for bilingual speakers, phoneme accuracy for shared elements was significantly higher than that for unshared elements, that frequency did not demonstrate predictive capability on high phoneme accuracy, and that cross-linguistic effects were evident in the English and Spanish productions of bilingual children, thus providing support for the IDSM.


Language comprehension and processing in speakers of different varieties of English

Tim Beyer

+ more

Although African American English (AAE) and Standard American English (SAE), the standard variety of English in the US, share many phonological forms, the grammars can differ substantially. For example, SAE 3rd person singular present 's', future contracted 'll', and past allomorphs 't/d' do not regularly appear in the surface form of AAE. This, among other evidence, suggests that while these morphemes carry tense information in SAE, they may not in AAE. An important question therefore becomes how AAE-speakers interpret SAE tense and aspect morphology. Using off- and on-line (eye-tracking) measures, this project investigates how 1st and 2nd grade AAE- and SAE-speakers interpret SAE tense and aspect morphology. Results show global comprehension patterns that accord with differences in the morphological systems of the children's native varieties and suggest that 1st and 2nd grade children are capable of rapidly integrating temporal information, but only when it is part of their native language variety.


Understanding Words in Context: What role for Left Inferior Prefrontal Cortex?

Eileen Cardillo

+ more

The ability to use contextual information to aid word recognition is a ubiquitous aspect of normal speech comprehension. However, evidence from semantic priming tasks suggests that this capacity breaks down differentially with certain forms of aphasia and/or left-hemisphere damage. In particular, it has been suggested that aphasic patients with damage to left inferior frontal areas may be particularly impaired in the bottom-up activation of word meanings on the basis of semantic context and those with lesions affecting left posterior-temporal areas may be especially impaired in more controlled aspects of lexical processing. I recently explored this hypothesis, and its alternatives, using an auditory sentence-priming task with 20 left-hemisphere damaged patients with a range of comprehension difficulty. I will present a preliminary analysis of their performance in this task as well as results from a Voxel-Based Lesion Symptom Mapping (VLSM) analysis of their behavior.


Shannon Rodrigue

+ more


Some Generalizations About Linguistic Generalization By Infants

LouAnn Gerken

+ more

One dimension on which more vs. less strongly constrained models of language acquisition vary is the amount of evidence required for a particular linguistic generalization. "Triggering" models require, in the limit, only a single datum to set an innate parameter, whereas less constrained models often arrive at a generalization by performing statistics over many exemplars from an input set. I will present data from research with 9- to 17-month-old infants and 4-year-old children, which explores the amount and type of input required for learners to generalize beyond the stimuli encountered in a brief laboratory exposure. All of the studies suggest that generalization requires a minimal number of data points, but more than just one, and that different subsets of the input lead to different generalizations. Taken together, the data provide direction for examining the ways in which innate constraints and learning via statistics may combine in human language development.


Elizabeth Redcay

+ more

The second year of life is a time of dramatic cognitive and social change. One of the most striking advances is in a child's language development. A typical 8 month old infant understands only a few words, the average 16 month old understands over 100 words, and the typical 24 month old understands many hundreds of words. The anatomical substrate for language processing during this time of rapid word learning remains unclear as there have been no functional magnetic resonance imaging (fMRI) studies of healthy, typically developing toddlers. The second year of life is also characterized by a marked absence of language growth in children with autism. Autism emerges during the first few years of life and is diagnosed in part by deficits in language and communication. Structural evidence shows brain differences from controls are greatest during this age.

However, no functional MRI data exist from young children with autism. In this talk, I will present an fMRI study examining passive speech comprehension in 10 typically developing toddlers (mean ± SD: 21 ± 4 mo) and 10 typically developing older children (39 ± 3 mo) during natural sleep. Our results from this study suggest that rapid language acquisition during the second year of life is not accounted for by classical superior temporal language areas alone, but instead appears to result from the utilization of frontal cortical functions as well as other brain regions.

Additionally, I will present preliminary fMRI data from young 2-3 year old children with autism who were presented with this same speech paradigm during natural sleep.


Long Term Activation of Lexical and Sublexical Representations

Gedeon Deák Cognitive Science and Human Development

+ more

It is commonly believed that young children are precocious word-learners. It is less clear what this belief entails. Are children very good at learning new words? Compared to whom? Compared to what other type of information? If word-learning is specialized, how does it get that way? These and other questions began inconveniencing people (especially those who see language as a mystical ability) about 10 years ago.

The common view that children have special (fast) word-learning processes has only three problems: lack of evidence, disconfirming evidence, and faulty underlying logic. Other than that, it is difficult to disprove. Nevertheless, my students and I began several experiments to isolate what, if anything, is specialized about children’s word learning. We ran several experiments showing that the “mutual exclusivity” bias (i.e., the apparent tendency for children to reject a new word for something they can already name; Markman, 1994) is in fact a weak, transitory “fan effect” (Anderson, 1972) that is not specific to novel words. In the process, we unexpectedly found that 4- and 5-year-old children are actually slower to learn new words than new facts (even if novel words are embedded in the fact) or new pictorial symbols. This finding caused teeth-gnashing and hair-rending in reviewers. To ease their suffering, we started another experiment to try to replicate this “slow mapping” effect in young children. Preliminary results suggest that 3-year-olds are no faster, and perhaps a little slower, to learn pictograms than words. Four-year-olds show no difference. Both 3- and 4-year-olds learn new facts faster than new words, even though facts are more complex, and factors such as exposure, novelty, and phonological difficulty are precisely controlled (or disadvantageous for facts). The fact-advantage is seen in immediate and delayed (one week) memory tests. A related claim that children make more systematic generalizations from new words (Waxman & Booth, 2000, 2001; Behrend et al, 2001) was not confirmed.

I will describe these studies in more detail, discuss the implications of the results, and solicit feedback on ongoing or planned follow-up studies.


Long Term Activation of Lexical and Sublexical Representations

Arthur Samuel (work done in collaboration with Meghan Sumner)

+ more

When a listener hears a word like "tape", current theories of spoken word recognition assert that recognition involves the activation of both lexical ("tape") and sublexical (e.g., /t/, /e/, /p/) representations. In contrast, when an unfamiliar utterance ("dape") is heard, no lexical representations can be settled on. Using a long-term priming paradigm, we examine whether representations remain active for at least 10-20 minutes. We approach this by examining lexical decision times for nonwords (e.g. "dape"), as a function of the words or nonwords heard 10-20 minutes earlier. We find that the time needed to identify a nonword as a nonword is delayed if a similar word was heard 10-20 minutes before; there is no such delay if the nonword itself had previously been heard. Conversely, nonword processing is faster if a similar (but not identical) nonword had been presented previously. The delay caused by prior word exposure suggests that the word's lexical representation remains active, and competes with the nonword during its recognition. This interference is found both for items sharing onsets ("flute-floose") and offsets ("tape-dape"). The equivalence of these two cases supports word recognition models in which a word's lexical neighborhood determines the set of lexical competitors. The enhanced processing of a nonword due to having heard a similar nonword supports the existence of sublexical (e.g., consonant-vowel, and vowel-consonant) units that can retain activation over a surprisingly long time period.


Relationships between processing of meaningful linguistic and nonlinguistic sounds

Arielle Borovsky, Ayse Saygin, & Alycia Cummings

+ more

To what degree is the processing of language special? We present data from a large scale project that examines the behavioral correlates of nonlinguistic and linguistic comprehension in a number of patient populations. We report on data that examines the auditory comprehension of environmental and verbal sounds in a balanced task using the same verbal and nonverbal items. This test has been administered to a number of populations including: neurologically normal children, college students and elderly participants, children and adults with left and right hemisphere focal lesions, and children diagnosed with language impairment. In all cases, we fail to find behavioral dissociations between linguistic and nonlinguistic sound processing. These studies show that language is subserved at least in part by a domain-general system and shares processing and neural resources with other complex and overlearned multi-modal skills.


Prosodic disambiguation of syntactic structure: For the speaker or for the addressee?

Tanya Kraljic

+ more

Evidence has been mixed on whether speakers spontaneously and reliably produce prosodic cues that resolve syntactic ambiguities. And when speakers do produce such cues, it is unclear whether they do so "for" their addressees (the audience design hypothesis) or "for" themselves, as a by-product of planning and articulating utterances. Three experiments addressed these issues. In Experiments 1 and 3, speakers followed pictorial guides to spontaneously instruct addressees to move objects. Critical instructions (e.g., "Put the dog in the basket on the star") were syntactically ambiguous, and the referential situation supported either one or both interpretations. Speakers reliably produced disambiguating cues to syntactic ambiguity whether the situation was ambiguous or not. However, Experiment 2 suggested that most speakers were not yet aware of whether the situation was ambiguous by the time they began to speak, and so adapting to addressees' particular needs may not have been feasible in Experiment 1. Experiment 3 examined individual speakers' awareness of situational ambiguity and the extent to which they signaled structure, with or without addressees present. Speakers tended to produce prosodic cues to syntactic boundaries regardless of their addressees' needs in particular situations. Such cues did prove helpful to addressees, who correctly interpreted speakers' instructions virtually all the time. In fact, even when speakers produced syntactically ambiguous utterances in situations that supported both interpretations, eye-tracking data showed that 40% of the time addressees did not even consider the non-intended objects.


Speakers' control over leaking private information

Liane Wardlow Lane, Michelle Groisman & Victor S. Ferreira

+ more

Past research demonstrates that speakers sometimes make references to privileged objects (objects known only to them) when naming mutually visible objects (Horton & Keysar, 1996; Nadig & Sedivy, 2002; Wardlow & Ferreira, 2003). For example, Wardlow and Ferreira (2003) report a task where speakers and addressees were presented with four cards each depicting a simple object. Both could see the same three objects (i.e., a circle, a square, and a triangle), but the speaker could see an additional, privileged object (a smaller triangle). Speakers were asked to identify one of the mutually visible objects (the target) for the addressee. When asked to identify the triangle, speakers should have said "triangle." However, they often said "large triangle", as if they failed to account for perspective differences. Interestingly, such utterances serve to implicitly leak extra information. Here, "large triangle" conveys that the speaker can also see another, smaller triangle. But can speakers avoid communicating implicit information when doing so conflicts with their goals?

We used a referential communication task like that described above. On test trials, the privileged object was the same as the target object but differed in size, whereas on control trials, the privileged object was distinct. In the baseline block, speakers were simply asked to name a target. In conceal blocks, participants were given additional instructions that encouraged speakers to hide the identity of the foil when identifying the target. Specifically, after addressees selected the target, they could guess the identity of the privileged object. Speakers and addressees kept scores; a correct guess gave addressees an additional point. Thus, speakers were provided with both incentive and instruction to conceal the identity of the privileged object. If speakers can control leaking information, then the conceal instruction should reduce modifier use relative to baseline performance.

Results showed that on test trials, speakers used modifying adjectives more in the conceal condition (14.4%) than in the baseline condition (5.4%). Speakers rarely used modifying adjectives in the control conditions (1.4% and 0.5%). Thus, the instruction to conceal privileged information made speakers refer to it even more; this is likely because the instruction to conceal privileged objects served to make them highly salient, and the production system had a difficult time blocking the intrusion of such information. These results localize perspective-taking errors to a stage of processing, grammatical encoding, that is outside speakers' executive control. Additionally, the results suggest not only that leaked information may be information speakers want to keep private, but that attempts to conceal it might make its leakage even more likely. If so, these results are likely to be relevant to everything from interpersonal interactions to adversarial negotiations.


The Face of Bimodal Bilingualism

Jennie Pyers

+ more

Research with bilinguals indicates that the lexicons of both languages are active even during language-specific production. However, it is unclear whether the grammars of both languages are similarly active. For bimodal (sign-speech) bilinguals, the articulators of their two languages do not compete, enabling elements of ASL to appear during English production. Because ASL uses grammatical facial expressions to mark structures like conditionals and wh-questions--raised brows and furrowed brows respectively--we hypothesized that these nonmanual markers might easily be produced when bimodal bilinguals speak English.

Twelve bimodal bilinguals and 11 non-signing English speakers were each paired with a non-signing English speaker. We additionally paired the same 12 bimodal bilinguals with a Deaf native signer to elicit the same structures in ASL. We elicited conditional sentences by asking participants to tell their interlocutor what they would do in 6 hypothetical situations. Wh-questions were elicited by having participants interview their interlocutor to find out 9 specific facts. We recorded and coded the facial expressions that co-occurred with the spoken English sentences.

For bimodal bilinguals, there was no difference between the proportion of conditionals produced with raised brows in the ASL and English conditions. We observed a significant difference between the bimodal bilinguals and the non-signers in the proportion of conditionals that occurred with a raised brow. And the bimodal bilinguals timed the raised brow with the onset of the conditional clause, indicating that these raised brows were grammatical and those produced by the non-signers were gestural. The fact that the non-signers frequently produced a raised brow with conditionals points to the co-speech gestural origins of the conditional non-manual.

When producing English wh-questions, the bimodal bilinguals produced furrowed brows significantly less often than they did for ASL wh-questions, but significantly more often than the non-signers, who rarely furrowed their brows. Because the bimodal bilinguals did not completely suppress ASL grammatical facial expressions while speaking English, we conclude that both languages are simultaneously active in the bilingual brain.

While speaking English, bimodal bilinguals produced the wh-nonmanual less frequently than the conditional nonmanual. We argue that this difference arises from competition with affective and conversational facial expressions. Raised brows for non-signers carry positive affect and indicate an openness to communicate (Janzen & Shaeffer, 2002; Stern, 1977). The facial grammar of ASL conditionals would not affectively compete with this co-speech facial gesture. The furrowed brow is a component of the anger expression and the puzzled expression, and could be misinterpreted by non-signers (Ekman, 1972). As a result, bimodal bilinguals produce the ASL facial grammar with English wh-questions less often.

This study illuminates the gestural origins of ASL nonmanual markers, informs current accounts of ASL facial grammar, and reveals the impact of modality on the nature of bilingualism.


Optionality in Comparative Production

Jeremy Boyd & Bob Slevc

+ more

Why do grammatical options exist in a language? Having to choose between different ways of expressing a given meaning (e.g., the dative alternation, or -er versus more comparatives) might make production and comprehension more difficult. Alternatively, grammatical options might offer certain advantages (Bock, 1982). Corpus analyses by Mondorf (2003) found that, for adjectives that alternate in comparative form (e.g. angrier ~ more angry), the more variant tends to occur more often in syntactically complex environments. Mondorf explains this pattern of results by making the following claims:

(1) The distribution of -er and more comparatives is due to processing considerations.
(2) Speakers increase use of the more variant in syntactically complex environments to help listeners.
(3) Use of more helps listeners by simplifying parsing, and acting as a conventionalized warning of upcoming complexity.

These arguments, however, deserve closer scrutiny. First, corpus data is not ideally suited to making claims about processing. Second, while it is perfectly reasonable to assume that speakers might choose between linguistic alternatives based on a consideration of listener needs (Temperley, 2003), it may also be that speakers choose between options based on their own processing demands, and not on listener-based factors (Ferreira & Dell, 2000). The current set of experiments used an elicited production methodology to address the following issues:

(A) Whether speakers do, in fact, choose between morphological alternatives based on processing factors.
(B) Which kinds of processing complexities might be relevant to the choice between -er and more.
(C) Whether speakers' choices are based on listeners' needs, or the demands of their own production processes.


Foveal splitting causes differential processing of Chinese orthography in the male and female brain - Computational, behavioural, and ERP explorations

Janet Hsiao

+ more

In Chinese orthography, a dominant structure exists in which the semantic information appears on the left and the phonetic information appears on the right (SP characters); the opposite structure also exists, with the semantic information on the right and the phonetic information on the left (PS characters). Recent research on foveal structure and reading suggests that the two halves of a centrally fixated character may be initially projected and processed in different hemispheres. Hence, Chinese SP and PS characters may have presented the brain with different processing problems.

In this talk, I will present three studies examining the interaction between foveal splitting and structure of Chinese SP and PS characters. In computational modelling, we compared the performance of a split-fovea architecture and a non-split architecture in modelling Chinese character pronunciation. We then examined the predictions from the two models with a corresponding behavioural experiment and an ERP study. We showed that SP and PS characters create an important opportunity for the qualitative processing differences between the two cognitive architectures to emerge, and that the effects of foveal splitting in reading extend far enough into word recognition to interact with the gender of the reader in a naturalistic reading task.
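The split-fovea premise can be sketched in a few lines. This is my own toy illustration of the routing claim only (the function and field names are invented, and this is not the connectionist model compared in the talk): at central fixation, the left half of a character initially projects to the right hemisphere and the right half to the left hemisphere, so SP and PS characters deliver their phonetic components to different hemispheres first.

```python
def split_fovea_route(character, structure):
    """Route the two halves of a centrally fixated character to opposite
    hemispheres (left visual field -> right hemisphere, and vice versa).

    `structure` is 'SP' (semantic left, phonetic right) or 'PS' (the reverse).
    Illustrative toy code, not the reported split-fovea architecture.
    """
    mid = len(character) // 2
    routing = {
        'RH': character[:mid],   # left half -> right hemisphere
        'LH': character[mid:],   # right half -> left hemisphere
    }
    # Which hemisphere initially receives the phonetic component?
    routing['phonetic_first_in'] = 'LH' if structure == 'SP' else 'RH'
    return routing

print(split_fovea_route('XY', 'SP'))  # phonetic half lands first in the LH
print(split_fovea_route('XY', 'PS'))  # phonetic half lands first in the RH
```

The asymmetry this sketch makes explicit is why SP and PS characters pose different initial processing problems under a split-fovea assumption but not under a non-split one.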


Grammatical and Coherence-Based Factors in Pronoun Interpretation

Laura Kertz

+ more

We describe pronoun interpretation experiments in which a Coherence Hypothesis is tested against three preference-based systems of pronoun interpretation: the Subject Preference Hypothesis, the Parallel Structure Hypothesis, and the Modified Parallel Structure Hypothesis. We demonstrate that 'preferences' can be systematically disrupted through the manipulation of coherence, and that only the Coherence Hypothesis can predict the full range of co-reference patterns observed.


Deciphering the Architecture of the Spoken Word Recognition System

Arty Samuel

+ more

Most current models of spoken word recognition assume that there are both lexical and sublexical levels of representation for words. The most common view is that speech is initially coded as sets of phonetic features, with some intermediate recoding (e.g., phonemes) before it is mapped onto lexical representations. There is a longstanding debate about whether the information flow through such an architecture is entirely bottom-up, or whether there is also top-down communication from the lexical level to the phonemic codes.

The selective adaptation procedure offers a particularly effective way to address this debate, because it provides a test that relies on the consequences of top-down lexical effects, rather than on a direct subjective report. Three sets of experiments use this approach to decipher the word recognition system's architecture. One set uses lexically-based phonemic restoration to generate the adapting sounds, and a second set uses a similar approach based on the "Ganong" effect. The third set extends this approach to audiovisual lexical adaptation, combining the technique with a "McGurk" effect manipulation. Collectively, the studies clarify how visual and auditory lexical information are processed by language users.


Gap-filling vs. filling gaps: An ERP study on the processing of subject vs. object relative clauses in Japanese

Mieko Ueno

+ more

Using event-related brain potentials (ERPs), we investigated the processing of Japanese subject/object relative clauses (SRs/ORs). English ORs take longer to read (King & Just, 1991), increase PET/fMRI activation (Just, et al. 1996; Caplan et al., 2000, 2001), and elicit left-lateralized/bilateral anterior negativity (LAN) between fillers and gaps (King & Kutas, 1995), which is largely attributed to a longer filler-gap distance. In contrast, gaps in Japanese relative clauses precede their fillers, and the linear gap-filler distance is longer in SRs than in ORs. Nevertheless, Japanese ORs take longer to read (Ishizuka et al., 2003; Miyamoto & Nakamura, 2003), perhaps because in both English and Japanese, ORs involve a longer structural filler-gap/gap-filler distance in their syntactic representations (O'Grady, 1997). We investigated how gap-filler association in Japanese would compare to filler-gap association in English, and whether it is linear or structural distance that determines comprehension difficulty. Stimuli included SRs/ORs transliterated as:

SR: [ __ new senator-A attacked] reporter-D-T long-term colleague-N existed
OR: [ new senator-N __ attacked] reporter-D-T long-term colleague-N existed

'The reporter [who __ attacked the new senator]/[who the new senator attacked __ ] had a long-term colleague'

ORs in comparison to SRs elicited frontal negativity at the embedded verb and head-noun regions, and long-lasting centro-posterior positivity starting at the head-noun. The former may indicate that both storage and subsequent retrieval of a filler are associated with LAN (Kluender & Kutas, 1993), and the latter may index syntactic integration costs of a filler (Kaan et al., 2000), suggesting similar parsing operations for filler-gap/gap-filler dependencies. Finally, our data are better correlated with structural rather than linear distance.


Perceptual learning for speakers?

Tanya Kraljic

+ more

Listeners are able to quickly and successfully adapt to variations in speaker and in pronunciation. They are also able to retain what they have learned about particular speakers, and rapidly access that information upon encountering those speakers later. Recent research on perceptual learning offers a possible mechanism for such adaptations: it seems that listeners accommodate speakers' pronunciations by adjusting their own corresponding phonemic categories (Norris, McQueen & Cutler, 2003). Such adjustments can be retained for at least 25 minutes, even with intervening speech input (e.g., Kraljic & Samuel, 2005).

However, the specificity of perceptual learning with respect to particular speakers (and consequently, its implications for linguistic representation or organization) is not yet clear. Might particular perceptual information be preserved with respect to higher-level information about speaker identity, or do the adjustments rely on acoustic details? What happens when different speakers pronounce the same sound differently? Conversely, what happens when a sound is pronounced in the same 'odd' way but for different reasons (e.g., due to some idiosyncrasy of the speaker versus due to a dialectal change)? I will describe findings from a program of research that investigates these questions and others. I will also discuss how perceptual adjustments may or may not translate to adjustments in production, which often serve quite a different functional role than perceptual adjustments do.


Thematic Role and Event Structure Biases in Pronoun Interpretation

Hannah Rohde (joint work with Andy Kehler and Jeff Elman)

+ more

The question of whether pronouns are interpreted based primarily on surface-level morphosyntactic cues (subjecthood, recency, parallelism) or as a byproduct of deeper discourse-level processes and representations (inference, event structure) remains unresolved in the literature. These two views come together in a sentence-completion study by Stevenson et al. (1994), in which ambiguous subject pronouns in passages such as (1) were resolved more frequently to the (to-phrase object) Goal of a previous transfer-of-possession event rather than the (matrix subject) Source.

(1) John handed the book to Bob. He _________.

Stevenson et al. considered two explanations for this result: a thematic role bias for Goals over Sources, and an event-structure bias toward focusing on the end state of such events. To distinguish these hypotheses, we ran an experiment that compared the perfective ("handed") and imperfective ("was handing") forms of the transfer verb. The thematic role relations are equivalent between the two versions, but the imperfective, by describing an event as an ongoing process, is incompatible with a focus on the end state of the event. We found significantly more resolutions to the Source for the imperfective passages as compared to the perfective ones, supporting the event-structure explanation. Our results show that participants' interpretations of the ambiguous pronouns appear to reflect deeper event-level biases rather than superficial thematic role preferences. These findings will be presented within a broader model of discourse coherence and reference.


Rachel Mayberry

+ more

How does the timing of language acquisition constrain its ultimate outcome? In a series of experiments we have found that linguistic experience in early childhood affects subsequent language processing and learning ability across modalities and languages. Specifically, adults who acquired a language in early life can perform at near-native levels on subsequently learned, second languages regardless of whether they are hearing or deaf or whether their early language was signed or spoken.

By contrast, a paucity of language in early life leads to weak language skill in adulthood across languages and linguistic structures, as shown by a variety of psycholinguistic tasks, including grammatical judgment, picture-to-sentence matching, lexical access, and reading comprehension. These findings suggest that the onset of language acquisition during early human development dramatically alters both the capacity to learn and to process language throughout life, independent of the sensory-motor form of the early experience.


Motor learning as applied to treatment of neurologically based speech disorders

Don Robin

+ more

This seminar will provide an overview of principles of motor learning with special reference to speech motor learning in adults and children with apraxia of speech. In particular, I will present an overview of a number of studies in our laboratory and how they fit with the broader literature on motor learning.


External/Internal status explains neither the frequency of occurrence nor the difficulty of comprehending reduced relative clauses

Mary Hare/Ken McRae

+ more

McKoon and Ratcliff (Psychological Review, 2003) argue that reduced relatives like The horse raced past the barn fell are incomprehensible because the meaning of the RR construction requires a verb with an event template that includes an external cause (EC). Thus, reduced relatives with internal cause (IC) verbs like race are "prohibited". Their corpus analyses showed that reduced relatives are common with EC but rare with IC verbs.

Alternatively, RRs may be rare with IC verbs because few of these occur in the passive. Those that do, however, should make acceptable RRs, with ease of comprehension related to difficulty of ambiguity resolution rather than the IC/EC distinction. In two experiments, we show that English speakers willingly produce RRs with IC verbs, and judge their acceptability based on factors known to influence ambiguity resolution. Moreover, a regression model on our own corpus data demonstrates that frequency of passive, not IC/EC status, predicts RR frequency in parsed corpora. In summary, although there do exist reasons why the IC/EC distinction may be important for language use, this dichotomous distinction does not explain people's production or comprehension of sentences with reduced relative clauses. In contrast, factors underlying ambiguity resolution do.


Presentation on Wernicke's Aphasia

Nina Dronkers

+ more

This presentation is the second in a series of talks on aphasia, a disorder of language due to injury to the brain. This presentation will concern Wernicke's aphasia, the type of aphasia that affects the lexical-semantic system without affecting motor speech production. An individual with Wernicke's aphasia has kindly agreed to be interviewed in front of the audience, and will teach us, first-hand, about the effects of brain injury on the language system. This interview will be followed by a lecture on Wernicke's aphasia as well as its relationship to Wernicke's area of the brain. In addition, Wernicke's aphasia will be discussed in relation to semantic dementia, a neurodegenerative disorder that is often confused with Wernicke's aphasia.


Mechanisms for acoustic pattern recognition in a song bird

Timothy Gentner

+ more

The learned vocal signals of song birds are among the most complex acoustic communication signals, and offer the opportunity to investigate perceptual and cognitive mechanisms of natural stimulus processing in the context of adaptive behaviors. European starlings sing long, elaborate songs composed of short spectro-temporally distinct units called "motifs". I review studies that point out the critical importance of motifs in song recognition, and then show how experience-dependent plasticity acts to modify the single-neuron and ensemble-level representation of motifs in starlings that have learned to recognize different songs. Beyond the recognition of spectro-temporal patterning at the motif level, starlings also attend to statistical regularities in the sequential patterning of motifs within songs. Recent results demonstrate that starlings can learn to use arbitrary rules that describe the temporal patterning of motif sequences, including at least one rule that meets the formal definition of a non-regular context-free grammar -- an ability hypothesized as uniquely human. I discuss these data in the context of comparative models for vocal pattern recognition and syntactic processing.
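The classic example of a non-regular context-free pattern is A^nB^n: n motifs of one type followed by exactly n of another. Recognizing it requires counting (a stack or counter), which no finite-state (regular) recognizer can do for unbounded n. The sketch below is my own illustration of that formal point (the 'A'/'B' motif labels are placeholders, not the starling stimuli):

```python
def is_anbn(motifs):
    """Return True iff the motif sequence matches A^n B^n (n >= 1).

    A^n B^n is the textbook non-regular context-free language: accepting it
    requires matching counts across the two halves. Illustrative toy code.
    """
    n = len(motifs)
    if n == 0 or n % 2:
        return False
    half = n // 2
    return (all(m == 'A' for m in motifs[:half]) and
            all(m == 'B' for m in motifs[half:]))

print(is_anbn(['A', 'A', 'B', 'B']))   # True: counts match
print(is_anbn(['A', 'B', 'A', 'B']))   # False: (AB)^n alternation is regular
print(is_anbn(['A', 'A', 'B']))        # False: counts do not match
```

Note that the contrast drawn in the code comments mirrors the contrast in the experiments: alternating (AB)^n sequences can be recognized by a finite-state device, while A^nB^n cannot.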


Eileen Cardillo

+ more


Noriko Hoshino

+ more


Gestures worth a thousand words: Commonalities and differences in gesture and picture comprehension.

Ying Wu

+ more

Conversation is frequently accompanied by gestures that depict visuo-semantic features related to the content of the talk in progress. Does the capacity to construct meaning through gesture engage processes and neural substrates similar to those recruited in the comprehension of image-based representations of the visual world? This talk will compare event-related potentials (ERPs) elicited by photographs of common objects and iconic co-speech gestures. Previous research has demonstrated that the second member of an unrelated picture pair results in an enhanced negative-going deflection of the ERP waveform (N400) as compared to responses elicited by related picture probes. An earlier negative-going component, the N300, has also been found to exhibit sensitivity to manipulations of semantic relatedness. If the comprehension of pictures and gestures is mediated by overlapping systems, similarly distributed effects of congruency on the N300 and N400 components should be observed.

These predictions were addressed by extracting still images from videotaped segments of gestures in order to elicit brain responses comparable to those elicited by pictures. 16 healthy adults viewed contextually congruous and incongruous gesture stills, dynamic gestures, and photographs of common objects. N400 effects were observed in response to static and dynamic gestures, as well as pictures. Static gesture stills and pictures also elicited N300 effects with similar distributions, suggesting overlap in the systems mediating some aspects of gesture and picture comprehension. However, differences in the overall morphology of ERP waveforms suggest non-identical neural sources as well.


How adults and children detect meaning from words and sounds: An ERP study

Alycia Cummings

+ more

This study examined differences in neural processing of meaningful (words and natural sounds) vs. non-meaningful (sounds) information, and of meaningful information presented in the form of words vs. natural sounds. Event-related potentials (ERPs) were used to obtain precise temporal information. Action-related object pictures were presented with either a word or a natural sound, and non-meaningful drawings were paired with non-meaningful sounds. Subjects pressed a button indicating whether the picture and sound matched or mismatched. Non-meaningful stimuli were matched by "smoothness"/"jaggedness".

In both adults and children, words and environmental sounds elicited similar N400 amplitudes, while the non-meaningful sounds elicited significantly smaller N400 amplitudes. While there were no left hemisphere differences, the right hemisphere neural networks appeared to be more active during environmental sound processing than during word processing. The meaningful sounds showed similar scalp distributions, except at the most anterior electrode sites. In adults, the environmental sound N400 latency was significantly earlier than the word latency, while there were no reaction time differences.

As compared to the non-meaningful stimuli, meaningful sounds and words elicited widespread activation, which might reflect neural networks specialized to process semantic information. However, there was some evidence for differences in neural networks processing lexical versus meaningful, non-lexical input.


"This is a difficult subject"

Masha Polinsky and Robert Kluender

+ more

Subject-object asymmetries are well documented in linguistic theory. We review a variety of evidence from child language acquisition, normal adult sentence processing, language and aging studies, and cross-linguistic patterns supporting the notion that subjects present particular difficulties to language users and are therefore in a class by themselves. Using notions from information structure and judgment types (thetic vs categorical), we explore some avenues for addressing the intrinsic difficulty of subjects.


Is there a processing advantage for analytic morphology? Evidence from a reading-time study of English comparatives

Jeremy Boyd

+ more

In English, adjectives can be inflected for comparison in two different ways: through '-er' suffixation (bigger, happier), or via syntactic combination with 'more' (more belligerent, more romantic). The first option--where the comparative occurs as a single word--is referred to as SYNTHETIC MORPHOLOGY. The second option--in which the comparative is realized as multiple words--is called ANALYTIC MORPHOLOGY. There is some reason to believe that analytic realization confers certain advantages that synthetic realization does not. Creoles, for example, tend to favor analytic morphology (Bickerton, 1981; 1984). Some researchers claim that this fact indicates that analytic morphology is inherently easier to handle.

Mondorf (2002; 2003) developed a specific hypothesis along these lines. In corpora analyses of adjectives that fluctuate between synthetic and analytic versions (e.g. prouder ~ more proud, crazier ~ more crazy), she found that the presence of complex syntactic environments immediately following the comparative--e.g. to-complements, as in "Some news items are more fit to print than others"--seemed to trigger the analytic variant. Mondorf argues that the analytic version is favored in these circumstances because it helps to mitigate complexity effects. She acknowledges, however, that "there is no independent empirical evidence that the analytic variant serves as a signal foreshadowing complex structures, is easier to process or [is] in other ways more suited to complex environments" (2003: 253).

In the present talk, I present results from a self-paced reading-time study that bear on these issues. Subjects were asked to read sentence pairs like the following:

Analytic Condition: Highway 95 is more pleasant TO drive during the summer months.
Synthetic Condition: Highway 95 is pleasanter TO drive during the summer months.

Reading times were recorded and compared across Analytic and Synthetic conditions to see whether there was a facilitated reading time for 'to' (in CAPS, above) when a 'more' comparative was used. Analysis shows that this was indeed the case. Whether this result really indicates an analytic processing advantage--versus an effect of grammaticality and/or frequency--will be addressed.


Cross-Category Ambiguity and Structural Violations: Why "Everyone Likes to Glass" and "Nobody Touches the Agree"

Ryan Downey

+ more

Previous research suggests that violations during sentence processing may result in characteristic Event-Related Potential (ERP) patterns. One particular component, the Early Left Anterior Negativity (ELAN), has been elicited primarily in German after phrase structure violations with the following form:

(1) Das Baby wurde gefüttert.
The baby was fed.

(2) *Die Gans wurde im gefüttert.
*The goose was in-the fed.

Friederici et al. use the elicitation of an ELAN in phrase structure violations such as this (i.e., the reader encounters a verb when expecting a noun) as evidence that the brain is sensitive to syntactic information via an extremely early (100-250 msec) first pass parse.

To investigate what types of information the parser may be sensitive to, the present study investigated phrase structure violations during auditory processing of English sentences. Stimuli were constructed that were category-unambiguous (i.e., could only be a noun vs. could only be a verb) and frequency-biased category-ambiguous (i.e., could be used as a noun or a verb, but exhibited a "preference" for one). Initial results suggested an early frontal negativity to unambiguous phrase structure violations, but only when listeners heard a noun when they were expecting a verb (the opposite of the structure studied by Friederici et al.); the other violation -- hearing a verb when expecting a noun -- resulted in an unanticipated early (mostly) left anterior positivity. There were no significant ERP differences in the word-category ambiguous "violations". Post-hoc comparisons taking into account word concreteness yielded a potential explanation for the unpredicted initial findings. Possible alternative interpretations will be discussed. Results indicate that ERPs are useful in investigating the processing of various types of information during phrase structure violations in English.


Individual differences in second language proficiency: Does musical ability matter?

Bob Slevc

+ more

This study examined the relationship between musical ability and second language (L2) proficiency in adult learners. L2 ability was assessed in four domains (receptive phonology, productive phonology, syntax, and lexical knowledge), as were various other factors that might explain individual differences in L2 ability, including age of L2 immersion, patterns of language use and exposure, phonological short-term memory, and motivation. Hierarchical regression analyses were conducted to determine whether musical ability explains any unique variance in each domain of L2 ability after controlling for other relevant factors. Musical ability predicted ability with L2 phonology (both receptive and productive) even when controlling for other factors, but did not explain unique variance in L2 syntax or lexical knowledge. These results suggest that musical skills can supplement the acquisition of L2 phonology and add to a growing body of evidence linking language and music.
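The hierarchical-regression logic described above can be sketched in miniature. This is a toy illustration with simulated data and invented variable names, not the study's actual analysis: musical ability explains unique variance in an L2 outcome if R-squared increases when it is entered after the control predictors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120

# Hypothetical predictors: two control variables plus musical ability.
age_of_immersion = rng.normal(0, 1, n)
phon_stm = rng.normal(0, 1, n)   # phonological short-term memory
music = rng.normal(0, 1, n)      # musical ability

# Simulated outcome: L2 receptive phonology depends on all three.
l2_phonology = (0.4 * age_of_immersion + 0.3 * phon_stm
                + 0.3 * music + rng.normal(0, 1, n))

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Step 1: controls only.  Step 2: add musical ability.
r2_step1 = r_squared(np.column_stack([age_of_immersion, phon_stm]), l2_phonology)
r2_step2 = r_squared(np.column_stack([age_of_immersion, phon_stm, music]), l2_phonology)

print(f"unique variance (Delta R^2) for musical ability: {r2_step2 - r2_step1:.3f}")
```

The increment in R-squared at step 2 is the "unique variance" attributed to musical ability after the controls have had their say.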


CRL Research

Masha Polinsky and Vic Ferreira

+ more

Masha Polinsky and Vic Ferreira will be talking about research that is currently underway at CRL.


Processing focus violations: Comparing ERP and eye-tracking data

Wind Cowles

+ more

The linguistic focus in an answer to a wh-question must correspond to the wh-phrase in the question. When focus is mis-assigned and this correspondence is not possible, the answer becomes infelicitous, even when it provides the information asked for by the question. An example of this can be seen in (1), with the focus of the answers indicated by all caps:

(1) Who did the queen silence, the banker or the advisor?
a. It was the BANKER that the queen silenced.
b. #It was the QUEEN that silenced the banker.

In this talk I'll address how comprehenders respond to the kind of focus violation shown in (1b) by presenting the results of experiments using ERP and eye-tracking methodologies. The results of these studies provide converging evidence that such violations are treated by comprehenders as essentially semantic in nature. I will discuss these results in terms of (a) how comprehenders use focus information during processing and (b) the additional information that such direct comparison of ERP and eye-tracking data can provide.


Processing and syntax of control structures in Korean

Nayoung Kwon & Maria Polinsky

+ more

Korean shows the following productive alternation in object control:

i. John-NOM Mary_i-ACC [e_i to leave] persuaded
ii. John-NOM e_i [Mary_i-NOM to leave] persuaded

Primary linguistic data (Monahan 2004) indicate that (ii) must be analyzed as a form of backward object control (BC). This study was designed to look for processing evidence supporting the BC analysis.

Previous experimental studies have shown that cataphoric relations take longer to process than anaphoric relations (Gordon & Hendrick 1997, Sturt 2002, Kazanina & Phillips 2004). This predicts that BC (1b) should elicit slower reading time (RT) than forward control (FC, 1a).

(1) ‘The marketing department of the production persuaded the heroine to appear on a popular talk show to advertise the movie.’
(a) W7 heroine_i-acc [e_i W8 popular W9 talk_show-to W10 go-comp] W11 persuaded (FC)
(b) e_i W7 [heroine_i-nom W8 popular W9 talk_show-to W10 go-comp] W11 persuaded (BC)
(c) [e_i W7 popular W8 talk_show-to W9 go-comp]_j W10 heroine_i-acc t_j W11 persuaded (scrambled FC)

To test these predictions, a self-paced reading time (RT) study of Korean control was conducted using FC (1a), BC (1b), and arguably scrambled FC (1c) (n=40, each type, 23 subjects). At words 7 and 10, FC (1a) was processed significantly faster than BC (1b). Because of word order differences in scrambled FC (1c), RT from W7 to W10 was collapsed; RT in that region was greater for BC than for both FC types. The difference between scrambled and unscrambled FC was non-significant.
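The region-collapsing comparison described above can be sketched as a paired test over per-subject mean reading times. The numbers below are hypothetical, invented purely for illustration (not the study's data); the point is the shape of the BC-vs-FC analysis in the collapsed W7-W10 region.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical per-subject mean RTs (ms) for the collapsed W7-W10 region,
# one value per subject per condition (8 invented subjects).
fc_rt = [520, 540, 510, 555, 530, 525, 545, 515]   # forward control
bc_rt = [580, 600, 560, 610, 575, 590, 605, 570]   # backward control

# Paired t statistic on the per-subject differences (BC - FC):
# a positive t means BC was read more slowly, as the cataphora
# literature predicts.
diffs = [b - f for b, f in zip(bc_rt, fc_rt)]
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

print(f"mean BC slowdown: {mean(diffs):.1f} ms, t({len(diffs) - 1}) = {t:.2f}")
```

A within-subjects (paired) comparison is the natural choice here because every subject reads all condition types, so subject-level speed differences cancel out.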

These results provide experimental evidence for the psychological reality of backward control. While slower RT at W7 may be due in part to clause-boundary effects (Miyamoto 1997), the effect at W10 is unambiguously due to BC, as the parser back-associates the overt controller with the gap.

Control as A-movement: Evidence from the processing of forward and backward control in Korean.


Imitation and language learning

Michael Ramscar

+ more

In this talk I'll present a series of studies from my lab showing that children can master irregular plural forms simply by repeating erroneous over-regularized versions of them. We model and predict this phenomenon in terms of successive approximation in imitation: children produce over-regularized forms because the representations of frequent, regular items develop more quickly, such that at the earliest stages of production they interfere with children's attempts to imitatively reproduce irregular forms they have heard in the input. As the strength of the representations that influence children's productions settles asymptotically, the early advantage for frequent forms is negated, and children's attempts to imitate the forms they have heard are probabilistically more likely to succeed (a process that produces the classic U-shape of children's acquisition of inflection). These data show that imitation allows children to acquire correct linguistic behavior in a situation where, as a result of philosophical and linguistic analyses, it has often been argued that it is logically impossible for them to do so. Time permitting, I'll then discuss how imitation allows signing children to "invent language", why more imitation might help adults better learn a second language, and other primates a first.


Grant Goodall Department of Linguistics, UCSD

+ more

'Syntactic satiation' is the phenomenon in which a sentence that initially sounds bad starts to sound noticeably better with repeated exposure. Snyder (2000) has shown that this phenomenon can be induced experimentally but that only some unacceptable sentence types are susceptible. In this talk, I present the results of an experiment which attempts to shed light on whether satiation effects can still be induced even when the lexical items are varied in each presentation (in Snyder's study they were not), whether satiation can be induced in other languages, and whether satiation can be applied usefully to determine the source of unacceptability of sentences in one language or across languages. I will focus on cases of illicit lack of inversion in wh-questions in English and Spanish (e.g., *What John will buy? and *¿Qué Juan compró?) and I will show that satiation is observed in Spanish but not in English in these cases, suggesting that different mechanisms underlie inversion in the two languages.


Brain potentials related to negation and sentence verification

Lea Hald

+ more

Surprisingly little is known about the relative time courses of establishing the meaning and truth of linguistic expressions. A previous ERP study by Hald & Hagoort (2002) utilizing the N400 effect indicated that during on-line sentence comprehension, world knowledge information needed to determine the truth value of a sentence is integrated as quickly as lexical semantic information. However, an earlier ERP study by Fischler, Bloom, Childers, Roucos and Perry (1983) found that the N400 reflected a preliminary stage of sentence comprehension rather than the ultimate falseness of the sentence. Using sentences like the following, Fischler et al. found that for negative sentences the N400 reflected a mismatch between terms (robin, tree) at a preliminary stage of processing.

True, affirmative A robin is a bird.

False, affirmative A robin is a tree. (N400 for tree)

True, negative A robin is not a tree. (N400 for tree)

False, negative A robin is not a bird.

One possible explanation for the Fischler et al. results is that the sentences used always contained a categorical relationship between the first noun and the critical noun in the sentence (such as robin - bird).

In order to investigate this hypothesis we tested the original Fischler items in addition to items which did not contain this category relationship. The new items were like the following:

True, affirmative Hawaii is tropical.

False, affirmative Hawaii is cold.

True, negative Hawaii is not cold.

False, negative Hawaii is not tropical.

Contrary to the hypothesis that the original Fischler et al. results were a reflection of a categorical relationship between the first noun and the target noun, preliminary data indicate these new items replicate the original pattern of results. A discussion of these results in relationship to the N400 and sentence processing will follow.


Listening to speech activates motor areas involved in speech production

Stephen Wilson

+ more

Language depends upon the maintenance of parity between auditory and articulatory representations, raising the possibility that the motor system may play a role in perceiving speech. We tested this hypothesis in a functional magnetic resonance imaging (fMRI) study in which subjects listened passively to monosyllables, and produced the same speech sounds.

Listening to speech consistently activated premotor and primary motor speech production areas located on the precentral gyrus and in the central sulcus, supporting the view that speech perception recruits the motor system in mapping the acoustic signal to a phonetic code.


"Redefining semantic and associative relatedness"

Ken McRae U. of Western Ontario
Mary Hare Bowling Green State University Patrick Conley, U. of Western Ontario

+ more

The concepts of semantic and associative relatedness are central in both psycholinguistic and memory research. However, over time the definition of semantic relatedness has become overly narrow (limited to category co-ordinates), whereas the operationalization of associative relatedness (word association norms) has become its definition. These facts have led to confusion in the semantic memory and language understanding literatures, both theoretically and methodologically. The goals of this research are to redefine and resituate semantic and associative relatedness (and thus the structure of semantic memory), argue that "mere association" does not exist, re-evaluate the priming literature in this new light, and offer suggestions regarding future research.


Morphological Universals and the Sign Language Type

Mark Aronoff, Irit Meir, Carol Padden, & Wendy Sandler

+ more

The morphological properties that vary across the world's languages often come in clusters, giving rise to a typology. Underlying that typology are more general properties, found in most of the world's languages, and claimed to be universals. Natural sign languages define a new category that is at once typological and fully general: they appear to be characterized universally by modality specific morphological properties. Many of these properties, taken individually, are not outside the range of morphological possibilities found in spoken languages. It is the predictability with which the properties cluster in sign languages, together with the rapidity with which they develop in these young languages, that define the language type.

In addition to modality driven universals, sign languages we have studied also show language particular processes that are more directly comparable to those of spoken languages. Our goal is to identify universal features of morphology in human language that underlie both.

The sign language universal process we describe here is verb agreement.

The system has regular and productive morphological characteristics that are found across all sign languages that have been well studied: (1) Only a subset of verbs are marked for agreement (Padden, 1988). (2) That subset is predictable on the basis of their semantics; they involve transfer (Meir, 1998). (3) The grammatical roles that control agreement are source and goal. (4) The system is fully productive. (5) The formal instantiation of agreement is simultaneous rather than sequential. Our claim is that the universality of this system in sign languages, and the relatively short time span over which it develops, derive from the interaction of language with the visuo-spatial domain of transmission.

Yet at the same time, as its label suggests, verb agreement in sign languages follows the same syntactic restrictions as in spoken languages: in all languages, verbs may agree only with indexically identifiable properties of their subjects and objects (person, number, and gender in spoken languages; referential indices in sign languages). This indicates that the mechanism of agreement is universally available to human language (Aronoff, Meir, & Sandler, 2000).

We present new evidence that even iconic, sign language universal morphology does not arise overnight. Current work on a new, isolated sign language used in a Bedouin village reveals the kernels of verb agreement that have not yet developed into a full-fledged morphological system. We conclude that: (1) universal morphological properties underlie sign language typical grammar, (2) modality of transmission can have a profound influence on grammatical form, and (3) despite the predictable influence of modality on language form, the normal course of language development and change is detectable in sign language.


Language movement in Scientific Discourse

Robert Liebscher
Richard K. Belew

+ more

We focus on academic research documents, where the date of publication undoubtedly has an effect both on an author's choice of words and on a field's definition of underlying topical categories. A document must say something novel and also build upon what has already been said. This dynamic generates a landscape of changing research language, where authors and disciplines constantly influence and alter the course of one another.


Syntactic persistence in non-native language production

Susanna Flett School of Philosophy, Psychology & Language Sciences University of Edinburgh

+ more

A key aim of second language (L2) research is to determine how syntactic representations and processing differ between native and non-native speakers of a language. My PhD work is focused on using syntactic priming tasks to investigate these differences. Syntactic priming refers to the tendency people have to repeat the type of sentence construction used in an immediately preceding, unrelated sentence. This effect suggests the existence of mental representations for particular syntactic constructions, independent of particular words and meanings.

I will describe a study from my first year project which used a dialogue task and a computerized task to look at priming of actives and passives in Spanish. Participants were native speakers, intermediate L2 speakers and advanced L2 speakers of Spanish (for whom English was the L1). Results demonstrated a significantly stronger priming effect in the L2 speakers compared with the native speakers. This may be explained by passives being more common in English than Spanish, and this preference being transferred to the L2. In addition, for L2 speakers the message-to-syntax mappings will be relatively weaker than those in a native speaker and so more susceptible to priming manipulations. I will discuss these results and describe plans for future studies using this technique to look at L2 speakers.


Discourse Adjectives

Gina Taranto

+ more

In this talk I introduce Discourse Adjectives (DAs), a natural class whose members include apparent, evident, clear, and obvious, as in:

(1) a. It is clear that Briscoe is a detective.

      b. It is clear to you and me that Briscoe is a detective.

Of primary concern are the semantics of DAs in sentences like (1a), in which the conceptually necessary experiencer of clear is not expressed syntactically, and is interpreted much like (1b), with the relevant experiencers of clarity interpreted as the discourse participants - that is, both the speaker and the addressee.

I argue that the meanings of utterances such as (1a) are highly unusual semantically, in that they operate entirely on a metalinguistic level. Interlocutors use such utterances to provide information about their conversation rather than their world. Sentence (1a) does not provide new information about Briscoe, rather, it provides information about the interlocutor's beliefs about the designated proposition, in terms of the current conversation.

My analysis begins with a Stalnakerian model of context-update, as formalized by Heim (1982, 1983) and Beaver (2000). I augment this model with Gunlogson's (2001) representation of individual commitment sets of speaker and addressee within the Common Ground of a discourse, and Barker's (2002) compositional theory of vagueness.

My proposal relies on the (vague) degree of probability that the discourse participants assign to the truth of a proposition; the context-update effect of an utterance of (1a) removes from consideration those possible worlds in which the discourse participants do not believe that the proposition expressed by 'Briscoe is a detective' satisfies a vague minimum standard for 'clarity'. The semantics of utterances with DAs are shown to depend directly on probability, and only indirectly on truth. I argue that after an utterance with a DA is accepted into the Common Ground, interlocutors are licensed to proceed as if the designated proposition is true, if only for the current discussion.
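The proposed context-update can be sketched in miniature. Everything here is an illustrative assumption (the world labels, belief values, and clarity threshold are invented): an utterance of "It is clear that p" removes from the context set those worlds where the participants' degree of belief in p falls below the vague clarity standard.

```python
# Each possible world is paired with the degree of belief the discourse
# participants attach there to the proposition "Briscoe is a detective".
# Labels and values are invented for illustration.
context_set = {
    "w1": 0.95,
    "w2": 0.80,
    "w3": 0.40,   # participants doubt the proposition in this world
    "w4": 0.10,
}

CLARITY_STANDARD = 0.75   # assumed vague minimum standard for 'clarity'

def update_with_clear(worlds, standard):
    """Context update for 'It is clear that p': keep only the worlds in
    which the participants' degree of belief in p meets the standard."""
    return {w: p for w, p in worlds.items() if p >= standard}

updated = update_with_clear(context_set, CLARITY_STANDARD)
print(sorted(updated))   # ['w1', 'w2']
```

Note that the update says nothing about whether p is true in the surviving worlds, only about the participants' beliefs there, which is exactly the sense in which DAs are claimed to operate on a metalinguistic level.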

DAs are argued to have the ability to publicly commit all discourse participants to the content of their complements. This is shown to have a synchronization effect on the Common Ground of a discourse, which explains how it can be useful to have an expression type that has no normal descriptive content.


"Cultural differences in non-linguistic rhythm perception: what is the influence of native language?"

John Iversen & Ani Patel

+ more

Experience with one's native language influences the way one hears speech sounds (e.g., phonemes), and enculturation in a particular musical tradition influences the perception of musical sound. However, there is little empirical research on cross-domain influences: Does one's native language influence the perception of non-linguistic sound patterns? To address this issue, we have focused on rhythm, an important dimension of both language and music. We examined the perception of one aspect of rhythm (grouping) by native speakers of English and Japanese, languages with distinct rhythmic structure. We constructed simple sequences of tones alternating in either amplitude (loud-soft), pitch (high-low), or duration (long-short). Non-musician listeners were asked to indicate their perceptual grouping of tone pairs (e.g., loud-soft or soft-loud) and the certainty of their judgment.

Native English speakers in San Diego and native Japanese speakers in Kyoto participated, each responding to a total of 32 stimuli. We found a dramatic difference between English and Japanese speakers in the perception of duration sequences: Japanese speakers preferentially chose a long-short grouping, while English speakers strongly preferred a short-long grouping. In contrast, no marked differences were seen in the other conditions. We examine the hypothesis that the rhythmic structure of language creates perceptual biases that influence non-linguistic rhythm perception. Specifically, we compare the rhythmic structure of Japanese and English words to see if long-short syllabic patterns are more common in Japanese than English, and vice-versa.


ERP Associated to Gender and Number Agreement during Syntactic Processing

Horacio Barber

+ more

Languages tend to represent gender as a conceptual characteristic and/or as a formal property of words. In contrast, number is always considered a conceptual feature signalling the quantity of the referent. Moreover, from a lexical point of view, gender information is probably retrieved directly from the word form, whereas number is considered a morphological marking that combines with the stem it modifies. These lexical features probably have relevant consequences on the syntactic level.

The role of grammatical gender and number representations in syntactic processes during reading in Spanish was studied in two different experiments. ERPs were recorded while Spanish speakers read word pairs (Experiment 1) or sentences (Experiment 2) in which gender or number agreement relationships were manipulated. Disagreement in word pairs formed by a noun plus an adjective (e.g., faro-alto [lighthouse-tall]) produced an N400-type effect, while word pairs formed by an article plus a noun (e.g., el-piano [the-piano]) showed an additional left-anterior negativity effect (LAN). The agreement violations with the same words inserted in sentences (e.g., El piano estaba viejo y desafinado [the m-s piano m-s was old and off-key]) resulted in a LAN-P600 pattern. Differences between grammatical gender and number disagreement were found in late measures. In the word pairs experiment, P3 peak latency varied across conditions, being longer for gender than for number disagreement. In a similar way, in the sentence experiment, the last segment of the P600 effect was larger for gender than for number violations. These ERP effects support the idea that reanalysis or repair processes after grammatical disagreement detection could involve more steps in the case of gender disagreement.


Locality, Frequency, and Obligatoriness in Argument Attachment Ambiguities

Lisa King

+ more

Within the context of human sentence comprehension, one intensely investigated issue is whether sentence processing is immediately influenced by non-structural information. One potential problem with previous studies is that the grammatical function of the ambiguous constituent was typically manipulated along with the attachment ambiguity under consideration. In (1) the prepositional phrase (PP) can either be a modifier of the verb (1a) or an argument of the noun (1b) (adapted from Clifton et al 1991).

(1) a. The man expressed his interest in a hurry during the storewide sale.

b. The man expressed his interest in a wallet during the storewide sale.

Some studies provide evidence that ambiguous constituents are preferentially processed as arguments (e.g. Schutze & Gibson 1999), whereas other studies show limited or no argument preference (Kennison 2002, Ferreira & Henderson 1990). It is therefore difficult to determine if there was any effect of grammatical function ambiguity in previous studies.

The experiments to be discussed employed a moving window reading paradigm to investigate such factors as locality, obligatoriness of the argument, and co-occurrence frequency, while holding constant the grammatical function of the ambiguous constituent (2). In (2a), the ambiguous PP complement must attach to the matrix verb. In (2b), the ambiguous PP complement must attach to the embedded verb.

(2) V[obligatory PP complement]-NP-that-NP-V[optional PP complement]-PP

a. Phoebe put the magazines that Jack left under the bed before she made it.

b. Phoebe put the magazines that Jack left under the bed into the closet.

Three experiments tested the predictions made for the sentences in (2) by the Garden Path Theory (GPT; Frazier 1979), the Dependency Locality Theory (DLT; Gibson 1998, 2000), the Late Assignment of Syntax Theory (LAST; Townsend & Bever 2000), and the Co-occurrence Frequency Hypothesis (CFH).
The GPT and the DLT predict that only the structure in (2a) should have a garden path. Conversely, the LAST predicts that only the structure in (2b) should have a garden path. A corpus analysis and a production task permitted the embedded verbs to be divided into three categories: neutral bias, bias for, and bias against a PP complement. The CFH predicts that the structure in (2a) should have a garden path when the embedded verb is biased for a PP complement, and the structure in (2b) should have a garden path when the embedded verb is biased against a PP complement. The results from all three experiments showed that the structure in (2a) had the pattern of reading difficulties predicted by the GPT and the DLT.
Turning to the structure in (2b), the results from the experiment using neutral-bias verbs were also consistent with the predictions made by the GPT and the DLT. The results from the two experiments which used biased verbs, however, showed patterns of reading difficulties that were not predicted by the GPT and the DLT, but may be accounted for by positing a role for the frequency of co-occurrence between the embedded verb and its optional PP complement.


Is there a dissociation between verbal and environmental sound processing in young children?

Alycia Cummings

+ more

This study directly compared 15-, 20-, and 25-month-old infants' (n = 11, 15, and 14, respectively) knowledge of familiar environmental sounds to their related verbal descriptions, i.e. "Moo" versus "Cow Mooing".
Children were also placed into one of two verbal proficiency groups: Low (<200 productive words) or High (>200 productive words). Using an online picture/auditory word-matching paradigm, where the aural stimuli were either environmental sounds or their linguistic equivalents, infants' comprehension was measured for speed and accuracy in the identification of a target object.

Looking-time accuracy improved across age levels (F = 7.93, p < .001), demonstrating that some verbal and sound knowledge is related to life experience and/or maturational factors. Infants who were more verbally proficient also responded more accurately in the experiment than infants with small productive vocabularies (F = 8.4, p < .006). The interaction between age group and linguistic domain was not significant, suggesting that children in each age group responded similarly to both speech and sound object labels. The interaction between CDI grouping and domain did reach significance (F = 10.03, p < .003): infants with smaller productive vocabularies responded more accurately to sound than to verbal labels, a differentiation between modalities that disappeared in children with larger vocabularies.

Infants' looking time accuracy was also temporally sensitive. As more auditory information became available, all of the infants responded more accurately (F=41.35, p<.0001). This demonstrates that comprehension is not a static state, as even the youngest infants appeared to be constantly monitoring and updating their environmental representation.

This experiment provided no evidence for a delayed onset of environmental sound comprehension or for the domain specificity of language. Since the youngest and least language-experienced infants showed differential responding to sounds versus speech, environmental sounds could be a precursor to language, providing a bootstrap into later acquisition for some children. But the most consistent pattern was the finding that speech and meaningful sounds appeared to co-develop at each age, contributing to the mounting evidence that recognition of words and recognition of familiar environmental sounds share a common auditory processor within the brain. Supposedly "language-specific" cognitive processes are now being implicated in what would otherwise be considered nonlinguistic tasks.


First language acquisition: Building lexical categories from distributional information.

Holger Keibel

+ more

Recent methodological advances in computational linguistics demonstrated that distributional regularities in linguistic data can be one reliable and informative source of evidence about lexical categories (noun, verb, etc.). This might help to explain the observed robustness of early grammatical development despite the noisy and incomplete nature of the language samples that infants are exposed to. In this context, Redington, Chater, & Finch (1998) and Mintz, Newport, & Bever (2002) extensively explored co-occurrence statistical approaches: Words which tend to co-occur with the same kinds of neighboring words were inferred to be members of the same category.

We applied this general paradigm to child-directed speech in large high-density corpora. Beyond verifying the potential usefulness of distributional information, we sought to identify the precise regularities which this usefulness mainly relies upon. The results not only account for the robustness of the co-occurrence approach, they also reveal why it is more informative about some categories than others. This might in turn help to explain empirical findings regarding the order in which lexical categories typically emerge in first language acquisition (e.g. Olguin & Tomasello, 1993; Tomasello & Olguin, 1993). The focus of my talk will be on the differences between the categories noun and verb.
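The co-occurrence paradigm described above can be sketched in a few lines: words are represented by vectors of their immediate left/right neighbours, and words with similar neighbour profiles fall into the same category. The toy corpus, the one-word window, and all counts below are invented for illustration; they are not the corpora or the exact method discussed in the talk.

```python
from collections import Counter, defaultdict
import math

# Toy "child-directed speech" corpus (hypothetical sentences, for illustration only).
corpus = [
    "the dog runs", "the cat runs", "the dog sleeps",
    "the cat sleeps", "a dog eats", "a cat eats",
]

# Build a co-occurrence vector per word from its immediate left/right neighbours.
contexts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        if i > 0:
            contexts[w]["L:" + words[i - 1]] += 1
        if i < len(words) - 1:
            contexts[w]["R:" + words[i + 1]] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Words sharing neighbour profiles cluster together: dog/cat pattern as one
# category (nouns), runs/sleeps as another (verbs), with low cross-similarity.
print(cosine(contexts["dog"], contexts["cat"]))
print(cosine(contexts["runs"], contexts["sleeps"]))
print(cosine(contexts["dog"], contexts["runs"]))
```

On this toy corpus the noun-noun and verb-verb similarities are high while the noun-verb similarity is zero, which is the distributional signal the categorization approach exploits.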


"In search of the brain's organization for meaning: the N-V double-dissociation and other interesting phenomena"

Analia Arevalo

+ more

In this talk I will present some of the work we have conducted in our search to understand how the brain is organized for meaning, both linguistic and non-linguistic. In one of our studies, we began by exploring the notion of Noun-Verb dissociations, which has often been studied but remains controversial. We tested a group of 21 aphasic patients (Wernicke's, Broca's, and Anomics) along with a group of college-aged and age-matched controls on a word production task which counterbalanced noun and verb stimuli across three popular production modalities: Picture-naming (PN), reading and repetition. Results revealed that PN was the most difficult task across groups, and also the only modality in which any significant Noun-Verb differences were observed (contrary to other similar studies using blocked presentations of Noun-only or Verb-only stimuli). In addition, analyses over items revealed that all groups displayed a Noun advantage (commonly seen for healthy subjects and contrary to the notion of a Verb advantage in certain brain-injured groups). However, analyses over subjects revealed one piece of evidence for a Verb advantage in Wernicke's aphasics, who were significantly faster at processing verbs than nouns (again, only in the PN modality). These results led us to search for possible outliers and analyze these patients' performance on an individual basis. I describe three ways in which we conducted this outlier search and discuss our findings as well as ways of applying neuroimaging techniques, such as fMRI and VLSM (Voxel-based Lesion-Symptom Mapping), to this type of data. In addition, I discuss ways in which we have steered away from the Noun-Verb lexical distinction, to deeper, sensorimotor-based distinctions. In particular, we have investigated the notion of manipulability (items that do or do not involve hand imagery) as another way in which these same Noun-Verb stimuli may be categorized.
I describe current studies which have focused on this question as well as how we applied it to our own data with aphasics by creating our own objective classification of hand imagery based on our Gesture Norming study with healthy controls. I describe this study as well, along with some preliminary results and its relevance and application to the many questions we pose.


"Brain areas involved in the processing of biological motion revealed by voxel-based lesion-symptom mapping (VLSM) and fMRI"

Ayse Pinar Saygin

+ more

Image sequences constructed from a dozen point-lights attached to the limbs of a human actor can readily be identified as depicting actions. Regions in lateral temporal cortex (most consistently in the vicinity of the superior temporal sulcus, STS), which respond to this kind of motion, have been identified in both human and macaque brains. On the other hand, in the macaque brain “mirror neurons” which fire during both action production and passive action observation have been found in frontal cortex. Subsequent work has revealed that observing others’ actions leads to activations in inferior frontal cortical areas in humans as well. In humans, it appears that this response is relatively left-lateralized and overlaps partially with areas of the brain known to be involved with processing language. This posterior-frontal network is of interest to many cognitive scientists because it helps provide a unifying neural basis for perception, action, and perhaps even language.

Given that point-light biological motion figures depict actions, could their perception also recruit frontal cortex in a similar manner? Or are these stimuli too simplified to drive the neural activity in these frontal action observation areas?

We addressed this question in two studies: The first was a neuropsychological study which tested biological motion perception in left-hemisphere injured patients. Brain areas especially important for biological motion processing were identified using voxel-based lesion symptom mapping (VLSM). The second was an fMRI study on healthy normal controls. We scanned participants as they viewed biological motion animations, "scrambled" biological motion animations (which contain the local motion vectors but not the global form) and static frames from the same animations (baseline condition). Data were analyzed using surface-based techniques including high-resolution surface-based intersubject averaging.

Collaborators: S.M. Wilson, E. Bates, D.J. Hagler Jr., M.I. Sereno


"Using CORPUS Data to Model Ambiguity Resolution and Complementizer Use"

Doug Roland / Jeff Elman / Vic Ferreira

+ more

Structural ambiguities have long been used to study sentence processing. One example is the post-verbal Direct Object/Sentential Complement ambiguity in (1), where the post-verbal NP can be either a direct object (2) or the subject of a sentential complement (3).

(1) The people recalled the governor ...

(2) ... and elected a new one (DO).

(3) ... was still in office (SC-0).

However, very important questions remain unanswered: How much information is available to comprehenders as they process a structurally ambiguous sentence, and is the information used to resolve this ambiguity specific to these structures, or is it more general information associated with subjecthood and objecthood? If the information used to resolve this ambiguity is generic, then sentential complement examples with (4) and without (5) the complementizer that should have similar properties relative to direct object examples. However, some evidence suggests that complementizer use is not arbitrary (e.g., Ferreira & Dell, 2000; Hawkins, 2002; Thompson & Mulac, 1991).

(4)  Chris admitted that the students were right (SC-that).

(5)  Chris admitted the students were right (SC-0).

We use the 100-million-word British National Corpus to investigate the extent of ambiguity and the amount and specificity of the information available for disambiguation in natural language use (in contrast with the isolated contexts used in psycholinguistic experiments). We prepared a database of the approximately 1.3 million sentences in the BNC that contained any of the 100 DO/SC verbs listed in Garnsey, Pearlmutter, Myers, and Lotocky (1997), and identified all DO, SC-0, and SC-that examples. These examples were labeled for a variety of formal and semantic properties. The formal properties included the lengths of the subject and post-verbal NPs and their heads, and the logarithm of the lexical frequency of the heads of the subject NP and the post-verbal NP. The semantic properties consisted of automatically ranking the subject and post-verbal NPs and their heads on twenty semantic dimensions based on Latent Semantic Analysis (Deerwester, Dumais, Furnas, Landauer, & Harshman, 1990).

We then performed a series of regression analyses on this data. Our main findings include:

(1) The resolution of 86.5% of DO/SC-0 structural ambiguities can be correctly predicted from contextual information. This suggests sufficient information is nearly always available to determine the correct structural analysis - before reaching the point traditionally considered to be the disambiguation point. Additionally, through the analysis of cases where the model predicts the wrong subcategorization, it allows for the identification and detailed analysis of truly ambiguous or garden-path cases.

(2) The factors used by the model to resolve the DO/SC-0 ambiguity cannot be used to correctly identify pseudo-ambiguous SC-that examples as sentential complements. In fact, SC-that and DO examples have similar properties and form an opposition to SC-0 examples.

(3) The presence/absence of the complementizer that can be correctly predicted by the model in 77.6% of SC-0/SC-that examples, supporting previous evidence that complementizer use is not arbitrary. That is used specifically in cases where the SC has properties that are similar to DO examples.
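A regression of the kind described above can be sketched with a toy logistic model. The two features (post-verbal NP length and log frequency of its head), the data points, and the plain gradient-descent fit below are all invented for illustration; this is not the BNC database or the authors' actual model.

```python
import math

# Hypothetical examples reduced to two formal features:
# (post-verbal NP length in words, log frequency of the NP head).
# Label 1 = direct object reading, 0 = sentential complement reading.
data = [
    ((2, 4.1), 1), ((1, 5.0), 1), ((3, 3.8), 1), ((2, 4.6), 1),
    ((5, 2.2), 0), ((6, 1.9), 0), ((4, 2.5), 0), ((7, 1.4), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit a logistic regression by stochastic gradient descent on the log loss.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(2000):
    for (x1, x2), y in data:
        err = sigmoid(w[0] * x1 + w[1] * x2 + b) - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def predict(x1, x2):
    """Predicted subcategorization: 1 = DO, 0 = SC-0."""
    return int(sigmoid(w[0] * x1 + w[1] * x2 + b) >= 0.5)

correct = sum(predict(x1, x2) == y for (x1, x2), y in data)
print(f"{correct}/{len(data)} examples classified correctly")
```

The point of the sketch is the workflow, not the numbers: features of the subject and post-verbal NP feed a regression whose misclassified cases can then be inspected as candidate garden-path sentences.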


"Admitting that admitting sense into corpus analyses makes sense"

Mary Hare Bowling Green State University

+ more

Linguistic and psycholinguistic research has documented that there is a close relationship between a verb's meaning and the syntactic structures in which it occurs, and that learners and comprehenders take advantage of this relationship during both acquisition and processing (e.g. Dowty, 1991; Fisher, Gleitman, & Gleitman, 1991; Hare, McRae, & Elman 2003; Jackendoff, 2002).

In the current work we address the implications of these facts for issues in structural ambiguity resolution. Specifically, we argue that comprehenders are sensitive to meaning-structure correlations based not on the verb itself (as recent work on verb bias effects suggests) but on the verb's specific senses, and that they exploit this information during on-line processing.

In a series of corpus analyses, we first look at the overall subcategorization biases of a set of verbs that allow multiple subcategorization frames. The results of the first analysis demonstrate that individual verbs show significant differences in their subcategorization profiles across corpora. However, many verbs that take both direct object (DO) and sentential complement (SC) subcategorization frames differ in meaning between the two cases (e.g., admit in the sense 'let in' must occur with a DO, while in the sense 'confess/concede' it may take either frame).

In a second corpus analysis, using a set of verbs taken from recent psycholinguistic experiments, we test the extent to which sense differences of this sort underlie the cross-corpus inconsistency in bias (cf. Roland & Jurafsky, 2002). Individual senses for the set of verbs were taken from WordNet's Semantic Concordance (Miller, Beckwith, Fellbaum, Gross, & Miller, 1993). Corpus examples were annotated for verb sense, and subcategorization biases were then determined for the individual senses rather than for the verb itself. When bias estimates were calculated at the level of sense, they were much more stable across corpora. This suggests that the correlations between meaning and structure are most reliable at this level, and that this is therefore a more likely source of information for comprehenders to exploit.

Finally, we apply the results of these analyses to recent experiments on the use of verb subcategorization bias in ambiguity resolution, and show that the degree of consistency between sense-contingent subcategorization biases and experimenters' classifications largely predicts a set of recent experimental results. We argue from these findings that verb bias reflects comprehenders' awareness of meaning-form correlations, and comprehenders form and exploit these correlations at the level of individual verb senses, rather than the verb in the aggregate.


"Language, music, syntax, and the brain"

Aniruddh D. Patel The Neurosciences Institute

+ more

Language and music afford two instances of rich syntactic structures processed by the human brain. The question of the cognitive and neural relationship of these two syntactic systems is of interest to cognitive science, as it addresses the much-debated issue of modularity in language processing. Recent evidence from neuroscience regarding the relation of linguistic and musical syntax appears paradoxical, with evidence in favor of overlap from neuroimaging, and evidence against overlap from neuropsychology (dissociations). In this talk I use current cognitive theories of linguistic and musical syntactic processing to suggest a resolution to the paradox and to generate specific predictions to guide future research. The need for research on musical processing in aphasia will be discussed.


"The integration of semantic versus world knowledge during on-line sentence comprehension"

Lea Hald

+ more

The current research was aimed at addressing several specific questions regarding the integration of world knowledge during language comprehension. First, what is the time course of the on-line integration of semantic and world knowledge information? Second, what are the crucial brain areas involved in these processes?

It is a long-standing issue whether or not semantic information is prepackaged into the mental lexicon and therefore more immediately available than the world knowledge needed to assign a truth-value to a sentence. Two ERP studies were performed to investigate this question. Subjects were presented with sentences of the following types (critical words are underlined):

(a) "Amsterdam is a city that is very old and beautiful." (Correct)

(b)"Amsterdam is a city that is very new and beautiful." (World Knowledge Violation)

(c) "Amsterdam is a city that is very thin and beautiful." (Semantic Violation)

Sentence (b) is semantically well-formed, but not true, when considering the founding date of Amsterdam. In contrast, in sentence (c) the semantics of the noun "city" makes the adjective "thin" not applicable. The question was whether or not the waveforms for (b) would result in an N400 effect with the same latency and topography as a lexical semantic N400-effect (c).

The ERP waveforms for both (b) and (c) showed a clear and sizable N400 effect, with comparable onset and peak latencies. In addition, (c), but not (b), resulted in a late positivity with a posterior distribution.

To address the second issue of which brain areas are crucially involved in these processes, an fMRI version of the experiment was performed. Results indicated that both (b) and (c) activated the left inferior frontal gyrus. In addition, (c), but not (a) or (b), resulted in activation of the left posterior parietal region. Post-integration processes may be responsible for this differential activation between the world knowledge and semantic conditions.

The results of this research indicate that during on-line sentence comprehension world knowledge information is integrated as quickly as lexical semantic information. The left prefrontal cortex might be involved in an aspect of this recruitment/integration process.


"Flexible Induction of Meanings and Means: Contributions of Cognitive and Linguistic Factors"

Gedeon O. Deak Dept of Cognitive Science, UCSD
Gayathri Narasimham Dept of Psychology & Human Development, Vanderbilt University

+ more

The ability to rapidly adapt representational states and responses to unpredictable cues and exigencies is fundamental to language processing. Adaptation depends on flexible induction: selection of different regularities and patterns from a complex stimulus array in response to changing task demands. Flexible cognitive processes are believed to change qualitatively between 2 and 6 years, in conjunction with profound changes in language processing. Until recently, however, there was little data on the extent and source of changes in flexible induction in early childhood. This talk describes evidence of a shift from 3 to 4 years, in typically developing children, to improved ability to adapt to changing task cues. The shift spans verbal tests, flexible induction of word meanings and flexible rule use, as well as a new non-verbal test of flexible induction of "means," or object functions. These results imply that certain higher-order cognitive and social skills contribute to semantic and pragmatic development in early childhood.


Action comprehension in aphasia: Linguistic and non-linguistic deficits and their lesion correlates

Ayse Saygin

+ more

We tested aphasic patients' comprehension of actions with the aim of examining processing deficits in the linguistic and non-linguistic domains and their lesion correlates. Thirty left-hemisphere-damaged patients and 18 age-matched control subjects matched pictured actions (with the objects missing) and their linguistic equivalents (printed sentences with the object missing) to one of two visually presented objects. Aphasic patients had deficits in this task not only in the linguistic domain but also in the non-linguistic domain. A subset of the patients, largely consisting of non-fluent aphasics, showed a relatively severe deficit in the linguistic domain compared with the non-linguistic domain, but the reverse pattern of impairment was not observed. Across the group, deficits in the linguistic and non-linguistic domains were not tightly correlated, in contrast with prior findings from a similar experiment in the auditory modality (Saygin et al., 2003). The lesion sites identified as important for processing in the two domains were also independent: while lesions in the inferior frontal gyrus, premotor and motor cortex, and a portion of somatosensory cortex were associated with poor performance in pantomime interpretation, lesions around the anterior superior temporal lobe, the anterior insula, and the supramarginal gyrus were associated with poor reading comprehension of actions. Thus, linguistic (reading) deficits are associated with regions of the brain known to be involved in language comprehension and speech articulation, whereas action comprehension in the non-linguistic domain seems to be mediated in part by areas of the brain known to subserve motor planning and execution. In summary, brain areas important for the production of language and action are also recruited in their comprehension, suggesting a common role for these regions in language and action networks.
These lesion-symptom mapping results lend neuropsychological support to the embodied cognition and 'analysis by synthesis' views of brain organization for action processing.


"Generative grammar and the evolution of language"

Stephen Wilson Neuroscience Interdepartmental Program, University of California, Los Angeles

+ more

The detailed and specific mechanisms presumed by generative linguistic theories to be innate have long been thought by many to be difficult to reconcile with Darwinian evolution. In a recent review article, Hauser, Chomsky & Fitch (2002) have formulated a novel perspective on the problem, proposing a distinction between "broad" and "narrow" conceptions of the faculty of language. They argue that whereas many aspects of the "broad" faculty of language (sensory-motor and conceptual-intentional) have animal homologues and may have evolved by familiar mechanisms, the "narrow" faculty of language--roughly, generativity and recursion--is unique to humans and may constitute an exaptation. It seems to be implied that much of the complexity of grammar is an outcome of interactions between the recursive component and the broader aspects of the language faculty, in a manner strikingly reminiscent of recent emergentist approaches. In this talk I will discuss Hauser et al.'s arguments in detail, showing the continuity between their position and similar but less explicit suggestions made over the last few decades in the generative literature. The increasingly clear role of emergence in explaining grammatical complexity is welcome, but I will argue that "recursion", which plays a somewhat monolithic role in the authors' model, needs to be understood as more of a complex, multi-faceted set of processes.


"The behavior of abstract and concrete words in large text corpora"

Rob Liebscher
David Groppe

+ more

Over the past five decades, psycholinguists have uncovered robust differences between the processing of concrete and abstract words. One of these is the finding that it is easier for people to generate possible contexts for concrete words than for abstract words; that is, concrete words seem to have higher "context availability" (CA) than abstract words. Some have argued that this difference is the root cause of other basic processing differences between concrete and abstract words. For example, concrete words are typically easier to identify, read, and remember than abstract words.

While the greater context availability of concrete words is well established, it is not clear why this is so. Schwanenflugel (1991) hypothesized that abstract words may be used in a greater variety of semantic contexts than concrete words, and are therefore less likely to be part of a "prototypical" context that is easy to generate. This hypothesis is difficult to test with psycholinguistic methods, but is readily testable with corpus analysis techniques.

Audet and Burgess (1999) tested CA by measuring the "context density" (the percentage of non-zero elements in a word's co-occurrence vector) of concrete and abstract words in Usenet corpora. They found that the set of abstract words had a higher context density than concrete words, and argued that this confirmed Schwanenflugel's speculation.

We re-evaluate the results of Audet and Burgess (1999) using similar corpora and a carefully chosen subset of their words, controlled for high concreteness and high abstractness ratings. In addition to the set of words used by Audet and Burgess, we use another set of words that were rated as being more prototypically concrete or abstract. We show that context density is seriously confounded with word frequency (r = 0.96) in both sets and that differences in context density disappear when frequency is controlled for.

We then demonstrate that a more appropriate measure of contextual constraint is the entropy of a word's co-occurrence vector. Entropy reflects not only the presence but also the strength of association between a word and a context, and is not correlated with frequency. Entropy indicates that, in both sets, concrete words appear in more variable contexts than abstract words. This result runs counter to Schwanenflugel's hypothesis and suggests a rethinking of the psychological basis of context availability.
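The difference between the two measures can be sketched with invented counts. The two co-occurrence vectors below are hypothetical (they are not the Usenet data): a frequent word fills many context cells simply by occurring often, which inflates context density, while entropy instead reflects how evenly the word's occurrences are spread over its contexts.

```python
import math

# Hypothetical co-occurrence counts over 8 contexts (invented for illustration).
# The frequent word touches every context but uses one almost exclusively;
# the rare word touches fewer contexts but uses them much more evenly.
frequent_word = [400, 3, 2, 1, 1, 1, 1, 1]
rare_word = [5, 4, 0, 0, 3, 0, 4, 0]

def context_density(vec):
    """Percentage of non-zero cells (the Audet & Burgess-style measure)."""
    return 100.0 * sum(v > 0 for v in vec) / len(vec)

def context_entropy(vec):
    """Shannon entropy (in bits) of the normalized co-occurrence vector."""
    total = sum(vec)
    probs = [v / total for v in vec if v > 0]
    return -sum(p * math.log2(p) for p in probs)

# Density rewards sheer frequency; entropy does not.
print(context_density(frequent_word), context_density(rare_word))
print(context_entropy(frequent_word), context_entropy(rare_word))
```

Here the frequent word wins on density (100% vs 50% non-zero cells) yet has much lower entropy than the rare word, illustrating why a frequency-confounded density measure and an association-strength measure can point in opposite directions.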


"The relationship of eye gaze and agreement morphology in ASL:
An eye-tracking study"

Robin Thompson
Karen Emmorey
Robert Kluender

+ more

The licensing of agreement is a crucial feature of current syntactic theory, and as such it should be found in signed as well as spoken languages. Standard analyses of American Sign Language (ASL) propose three verb classes: agreeing verbs (e.g., BLAME), spatial verbs (e.g., GO), and plain verbs (e.g., LOVE) (Padden, 1983). Verbs are distinguished by the type of agreement morphology they occur with. Agreeing verbs are directed toward locations in signing space indicating subject/object arguments of the verb, spatial verbs toward locatives, and plain verbs do not occur with agreement morphology. However, Neidle et al. (NKMBL, 2000) claim that all verbs in ASL are agreeing, with only the manner in which agreement is marked differing across verb types. On this view, verbs can be marked with either manual agreement (verb directed toward locations associated with the subject/object), nonmanual agreement (eye gaze toward the object/head-tilt toward the subject), or through the use of an overt pronoun/nominal. While manual agreement is overtly morphological, nonmanual markings are claimed to be manifestations of abstract phi-features. If eye gaze is a possible marker of object agreement and if all verbs are underlyingly agreeing, then one would expect gaze toward object locations equally for agreeing, spatial and plain verbs (or with higher frequency for plain verbs since they lack manual marking). The NKMBL analysis also predicts eye gaze accompanying intransitive verbs towards the subject or the addressee (the default direction). Finally, plain verbs with null object pronouns must be marked with eye gaze, the only available feature-checker in this case. To test these predictions, we conducted a language production experiment using head-mounted eye-tracking to directly measure eye gaze.

Methods: Using the eye-tracker, 10 Deaf native signers (1) told another Deaf native signer a story designed to elicit spatial and agreeing verbs, and (2) made up sentences using specified verbs. Results: Consistent with NKMBL's claims, eye gaze accompanying agreeing verbs was most frequently directed toward the location of the syntactic object (70.1%) and otherwise toward a location on or near the addressee's face. However, eye gaze accompanying spatial verbs was directed toward a locative (63.2%) rather than toward the object of transitive verbs/subject of intransitive verbs as predicted. Eye gaze accompanying plain verbs was seldom directed toward the object (11.8%), inconsistent with NKMBL's claims; gaze for these verbs was generally toward the addressee's face (45.3%) or toward 'other', a location other than subject, object, or addressee (38.6%). Also, unlike agreeing verbs, plain verbs were never produced with null object pronouns. These results argue against NKMBL's claim that all verbs are agreeing, since eye gaze accompanying plain verbs does not mark the syntactic object. Additionally, while the results do support an analysis of eye gaze as marking agreement for agreeing and spatial verbs, agreement is not uniformly with the syntactic object as claimed by NKMBL. Thus, we propose an alternative analysis of eye gaze agreement as marking the 'lowest' available argument (Subject > Direct Object > Indirect Object > Locative) of an agreeing or spatial verb.


"Signposts along the garden path"

Doug Roland
Jeff Elman
Vic Ferreira

+ more

Structural ambiguities such as the post-verbal Direct Object/Sentential Complement ambiguity have long been used to study sentence processing. However, a very important question remains unanswered: How much information does a comprehender have available while processing a structurally ambiguous sentence? We use a large corpus (the British National Corpus) to investigate the actual extent of ambiguity and how much information is available for disambiguation. Corpus data allow us to investigate the relative frequency and strength of the disambiguating information found during natural sentence comprehension, rather than that available in the isolated contexts used in psycholinguistic experiments.

We prepared a database of approximately 1.3 million examples of the 100 DO/SC verbs listed in Garnsey, Pearlmutter, Myers, & Lotocky (1997). Of these, approximately 248,000 examples were structurally ambiguous (specifically, DO or 'that'-less sentential complement). These examples were labeled for a variety of formal and semantic properties. The formal properties included the length of the subject and post-verbal NPs and their heads, and the log of the lexical frequency of the heads of the subject NP and the post-verbal NP. The semantic properties consisted of automatically ranking the subject and post-verbal NP heads on five semantic dimensions based on performing Principal Component Analysis on the WordNet hypernyms of the heads of these NPs.

We then performed a regression analysis on this data. This analysis produced a model that was able to predict the subcategorization correctly in nearly 90% of the structurally ambiguous cases. Because of the simplicity of factors included in the model, we feel that this represents a lower bound on the amount of information available. In fact, many of the cases where the model mis-predicted the subcategorization of the example could have been correctly resolved if pronoun case had been taken into account.
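As a rough illustration of this kind of classifier (a toy sketch only -- these are invented feature values, not the authors' actual regression model, feature set, or BNC data), a simple logistic model over a few surface properties can be trained to predict DO vs. SC continuations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Gradient-descent logistic regression: learns P(SC | features)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Invented features per ambiguous example:
# [subject NP length, post-verbal NP length, log frequency of post-verbal head]
X = np.array([[1, 5, 2.0], [2, 6, 1.5], [1, 2, 4.0], [3, 1, 4.5],
              [1, 7, 1.0], [2, 2, 3.8], [1, 6, 1.2], [2, 1, 4.2]], dtype=float)
y = np.array([1, 1, 0, 0, 1, 0, 1, 0], dtype=float)  # 1 = SC, 0 = DO

w, b = fit_logistic(X, y)
accuracy = np.mean((sigmoid(X @ w + b) > 0.5).astype(float) == y)
```

On real corpus counts one would of course evaluate on held-out data; the point here is only that a handful of simple surface features can carry substantial disambiguating information.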

Not only does this model allow for the identification of factors that can be used to resolve ambiguity, it also allows for the identification and detailed analysis of truly ambiguous or garden-path cases, through the analysis of cases where the model mispredicts subcategorization.


"Neural resources for processing language and environmental sounds:
Lesion-symptom mapping of patients with left hemisphere damage and fMRI with normal controls"

Ayse Pinar Saygin & Frederic Dick

+ more

Environmental sounds share quite a few perceptual and informational features with language, thus making them useful in exploring possible links between verbal and nonverbal auditory processing. However, the neural resources for processing environmental sounds, and especially the degree to which these overlap with neural systems for processing language, are not completely understood. To examine the relationships between environmental sound and speech processing, we used two complementary methods: behavioral and lesion analyses in patients with brain damage and fMRI with normal controls. We used a 2-alternative forced-choice design where the task was to match environmental sounds and linguistic phrases to corresponding pictures. The verbal and nonverbal task components were carefully matched through a norming study.

In Experiment 1, 30 left hemisphere damaged, 5 right hemisphere damaged patients and 19 age-matched controls were tested behaviorally and patients' impairments in the verbal and nonverbal domains were examined. Lesion mapping was conducted using both traditional overlays as well as voxel-based lesion-symptom mapping (VLSM), an analysis method and software developed by our group.

In Experiment 2, 12 participants were scanned in a 1.5 T clinical scanner using a 'sparse sampling' paradigm that minimized the effect of the acoustical noise produced by the gradient coils. Group data were analyzed in order to look for regions active during processing of environmental sounds or speech. In order to provide additional statistical power, ROI analyses were conducted using regions drawn on individual subjects' cortical surfaces. Cross-correlations of each condition's positive and negative activations (relative to the complex baseline task) were performed in order to assess whether distributed coding of domain could be observed in these ROIs.

One of our more general goals is to integrate the two methods of brain mapping that we used in this project: lesion mapping and functional neuroimaging. Here we will present some analyses in which the lesion maps obtained via VLSM in Experiment 1 are used as masks for the fMRI data collected in Experiment 2. Therefore we will not only examine the neural correlates of environmental sound and language processing with further precision, but will also show how the two brain mapping methods can be used in conjunction to explore issues of interest in cognitive neuroscience.


"Can speakers avoid linguistic ambiguities before they produce them?"

Victor S. Ferreira, L. Robert Slevc, and Erin S. Rogers

+ more

An expression is linguistically ambiguous when it can be interpreted in more than one semantically distinct way. Because such ambiguities potentially pose a fundamental challenge to linguistic communication, it is often assumed that speakers produce utterances that specifically avoid them. In four experiments, we had speakers describe displays that sometimes contained linguistic ambiguities by including two pictures with homophonic labels (e.g., a smoking pipe and a plumbing pipe). The experiments showed that linguistic ambiguities can be especially difficult to avoid, and that the difficulty comes from the fact that speakers are evidently unable to look ahead past the currently formulated linguistic expression to recognize that it can be interpreted more than one way, even when the alternative interpretation is itself just about to be linguistically encoded and articulated. On the other hand, speakers do avoid linguistic ambiguities when the alternative interpretation has already been described with the potentially ambiguous label. The results suggest that speakers can avoid linguistic ambiguities only by looking backwards at utterances they've already produced, not by looking forward at utterances they might be about to produce.


"Computational Limits on Natural Language Suppletion"

Jeremy Boyd Department of Linguistics, UC San Diego

+ more

While most natural languages tend to contain suppletive pairs, suppletion is vastly overshadowed in all languages by regular form-to-form mappings. What enforces the cross-linguistically low level of suppletion? This work makes the intuitive argument that suppletive mappings are kept to a minimum for a very simple reason: they are harder to learn. In what follows, this point is illustrated through an examination of suppletive and regular (uniform) verbal paradigms.

Most contemporary theories of morphology offer no way to constrain the amount of suppletion that occurs in a language. In inferential-realizational theories (Stump, 2001) for instance, a verbal paradigm can be realized using either rules that enforce uniformity, or rules that allow suppletion, as in the following examples from Spanish [see Table in PDF].

Repetition of the root habl- in each cell of the paradigm for hablar gives rise to its uniform nature. In contrast, no identifiable root exists to anchor the forms that make up ser's paradigm. As a result, the relationship between any two members of the paradigm is suppletive. The problem here is that theories that make use of these kinds of rules offer no reason to favor the class of rules that realizes a paradigm uniformly over the class that realizes a paradigm suppletively. This lack of constraint erroneously predicts that a language could contain more suppletive than uniform paradigms, or even be composed solely of suppletive paradigms.

The fact that the grammar does not provide a way to limit suppletion is not problematic, however, if we adopt the position that grammars are embedded within a biological system that has limited computational resources. In order to demonstrate the validity of such an approach, I devised a set of 11 'languages,' each containing a different number of suppletive verbal paradigms, ranging from no suppletion to a language in which all paradigms are suppletive. These languages were then presented to a neural network designed with a standard feedforward architecture and running the backpropagation-of-error learning algorithm (Rumelhart & McClelland, 1986). The results show that, as the number of suppletive paradigms the network is asked to master increases, learnability decreases: [see Table in PDF].

Further, there is an upper limit, or threshold, on the number of suppletive paradigms that can be learned without significantly affecting network performance. In effect, the model predicts that suppletion in natural language will be tolerated, so long as it is kept to a minimum.
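The shape of such a simulation can be sketched as follows (a toy version under invented assumptions -- the encodings, network sizes, and training settings are not Boyd's actual setup): a feedforward network trained by backpropagation on 'languages' in which some paradigms compose a shared root with an affix and others map to arbitrary, unrelated forms.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def build_language(n_lex=6, n_cells=3, n_suppletive=0, dim=10):
    """Each (lexeme, cell) pair maps to an output form vector. Uniform
    paradigms compose root + affix; suppletive ones use arbitrary forms."""
    roots = rng.standard_normal((n_lex, dim))
    affixes = rng.standard_normal((n_cells, dim))
    X, Y = [], []
    for lex in range(n_lex):
        for cell in range(n_cells):
            X.append(np.concatenate([one_hot(lex, n_lex), one_hot(cell, n_cells)]))
            if lex < n_suppletive:
                Y.append(rng.standard_normal(dim))     # arbitrary, unrelated form
            else:
                Y.append(roots[lex] + affixes[cell])   # systematic root + affix
    return np.array(X), np.array(Y)

def train(X, Y, hidden=8, lr=0.1, epochs=3000):
    """One-hidden-layer feedforward net trained by backpropagation (MSE loss)."""
    W1 = 0.1 * rng.standard_normal((X.shape[1], hidden))
    W2 = 0.1 * rng.standard_normal((hidden, Y.shape[1]))
    def loss():
        return np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)
    initial = loss()
    for _ in range(epochs):
        H = np.tanh(X @ W1)
        err = H @ W2 - Y                      # output-layer error
        W2g = H.T @ err / len(X)
        Hg = (err @ W2.T) * (1.0 - H ** 2)    # backprop through tanh
        W1g = X.T @ Hg / len(X)
        W1 -= lr * W1g
        W2 -= lr * W2g
    return initial, loss()

uniform_init, uniform_final = train(*build_language(n_suppletive=0))
suppletive_init, suppletive_final = train(*build_language(n_suppletive=6))
```

The expectation mirroring the talk's result is that residual error grows with n_suppletive: suppletive paradigms cannot exploit the shared root-plus-affix regularity and must each be memorized separately.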

Although this work focuses on the way in which performance limitations can supplement inferential-realizational theories of morphology to provide constraints on suppletion, it can be applied to other morphological theories as well, most of which (if not all) also fail to put limits on whatever mechanism they use to account for suppletion.


"Empirically-oriented comparative studies of rhythm and melody in language and music"

Aniruddh D. Patel Associate Fellow, The Neurosciences Institute

+ more

In this talk I address the following questions: Does the rhythm of a composer's native language have an influence on the rhythm of his/her music? Does the processing of speech intonation have any neural relationship to the processing of musical melody? The larger issue addressed by these studies is the extent to which linguistic patterns and processes are shared with vs. sealed off from other aspects of cognition and perception.


"Perturbation & adaptation during language comprehension:
results from behavioral and fMRI studies"

Amy Ramage San Diego State University

+ more

The current investigation examined perturbation and adaptation during language comprehension in young normal subjects. Induced instability was studied by increasing perceptual demand (compressed sentences), syntactic demand, or both. Two experiments were conducted, one behavioral and one using fMRI technology, to explore the relations between brain responses and behavior. This presentation addresses whether changes in rate of speech, syntax, or both induced an instability, or perturbation, and explores subsequent adaptation to increased instability. The results suggested that subjects develop and maintain a representation of either the syntactic frame (i.e., via a process like priming), a conscious strategy for accommodating syntactic complexity, or a rate normalization schema. The second experiment used fMRI to measure brain activation associated with perturbation and adaptation of language and showed regions active during increased demand and/or during adaptation. Those brain regions that remained active during adaptation may have been used to maintain the linguistic or perceptual frame.


"Pre-attentive auditory processing of lexicality"

Thomas Jacobsen Kognitive & Biologische Psychologie
University of Leipzig

+ more

Which aspects of speech do we comprehend even while we are ignoring the input? Are isolated words processed pre-attentively? Lexicality and change detection based on auditory sensory memory representations were investigated by presenting repetitive auditory words and pseudo-words under ignore conditions in oddball blocks. In a cross-linguistic study, sound items that are words in Hungarian and pseudo-words in German, and items with the reverse characteristics, were used. A fully crossed 2x2 design of word and pseudo-word deviants and standards was implemented. Deviant words and pseudo-words elicited the Mismatch Negativity component of the event-related brain potential. The standards' lexicality hypothesis was confirmed, which holds that lexical standards lead to different default processes than non-lexical pseudo-word standards, regardless of the lexicality of the deviant. In both language groups the Mismatch Negativity was larger with word standards than pseudo-word standards, irrespective of the deviant type. It is suggested that an additional process is triggered during deviancy detection by a pre-attentive tuning-in to word standards. Furthermore, in both groups the ERPs elicited by word standards were different from ERPs elicited by pseudo-word standards starting around 220 ms after the uniqueness point. This also demonstrates that the lexicality of the context affects the processing of the auditory input.


Talk in the here and now

Herbert H. Clark

+ more

As people talk, they anchor what they say to the here and now, to their current common ground. Indeed, anchoring is an essential part of their communication. They do this by communicative acts of indicating. They indicate themselves as speaker and addressee; they indicate the objects and events they refer to; and they indicate certain times and places. The issue is how they manage that. In this talk I take up how people indicate things in joint activities such as building models, furnishing a house, and gossiping. The evidence I use comes from video- and audiotapes. In the end I will argue that much of what is considered background or context -- and therefore non-communicative -- is really made up of communicative acts of indicating.


"An accounting of accounts: Pragmatic deficits in explanations by right
hemisphere-damaged patients"

Andrew Stringfellow

+ more

Right hemisphere brain damage (RHD) has typically been characterized as producing deficits in visuospatial abilities, attention deficits, and/or deficits in the processing of emotion. Over recent years, more attention has been paid to the abnormal verbal abilities that may present following RHD. These abnormalities are typically characterized as involving "non-literal" language; while some of the problems no doubt arise from the lower-level deficits above, others are putatively associated with a deficiency in theory of mind specifically, and social cognition more generally. The results of two studies are presented; these studies attempt to characterize the discourse styles of RHD patients in the production of requests for assistance and explanations for/accounts of transgressive behavior. An attempt will be made to situate these results within existing accounts of RH (dys-)function.


"On the processing of Japanese Wh-Questions: An ERP Study"

Mieko Ueno

+ more

Using event-related brain potentials (ERPs), I investigated the processing of Japanese wh-questions, i.e., questions including wh-words such as 'what' and 'who'. Previous ERP studies on the processing of wh-questions in English and German have reported effects of left anterior negativity (LAN) between a displaced wh-word (filler) and its canonical position (gap). These have been argued to indicate verbal working memory load (Kluender & Kutas, 1993; Fiebach, et al. 2001). Unlike English or German wh-words, Japanese wh-words typically are not displaced, but remain in canonical Subject-Object-Verb word order (so-called wh-in-situ). Additionally, Japanese wh-words are associated with a question particle that by its clausal placement indicates what part of the sentence is being questioned (Nishigauchi, 1990; Chen, 1991), e.g., 'Did you say what he brought?' (embedded clause scope) and 'What did you say he brought?' (main clause scope). Both a self-paced reading-time study (Miyamoto & Takahashi, 2001) and an ERP study (Nakagome et al., 2001) suggest that the parser expects a question particle following a Japanese wh-element. Given the above, I tested the extent to which the neural processing of Japanese wh-questions shows similarities to the processing of English or German wh-questions.
In experiment 1, stimuli were mono-clausal wh- and yes-no-questions with the object NP (wh or demonstratives) in situ (1a) and displaced (1b). In experiment 2, stimuli were bi-clausal wh-questions (with embedded and main clause scope wh) and their structurally equivalent yes/no-question counterparts. For each experiment, a group of 20 native speakers of Japanese was used, and sentences were presented visually one word at a time.
Bi-clausal main clause scope wh-questions (2b) elicited greater anterior negativity between wh-words and corresponding question particles. This was similar to ERP effects seen between fillers and gaps in English and German, and suggests similar mechanisms for processing wh-related dependencies across syntactically distinct languages. In addition, both mono-clausal ((1a) and (1b)) and bi-clausal ((2a) and (2b)) wh-questions elicited greater right-lateralized (mostly anterior) negativity at sentence end. This effect can most conservatively be interpreted as an end-of-sentence wrap-up effect. However, since such effects have consistently been reported as right posterior negativities, the possibility exists that the effect indexes a processing effect specific to a wh-in-situ language like Japanese. One possible account is the effect of the integration of sentential wh-scope.
(1) Mono-clausal stimuli

a. Ano jimotono shinbun-ni yoruto
   the local newspaper-to according
   sono yukanna bokenka-ga toto [nani-o/sore-o] mitsuketandesu-ka.
   the brave adventurer-N finally [what-A/that-A] discovered-Q

   'According to the local newspaper, did the brave adventurer finally discover what/that?'

b. Ano jimotono shinbun-ni yoruto
   the local newspaper-to according
   [nani-o/sore-o] sono yukanna bokenka-ga toto mitsuketandesu-ka.
   [what-A/that-A] the brave adventurer-N finally discovered-Q

   'According to the local newspaper, did the brave adventurer finally discover what/that?'

(2) Bi-clausal stimuli

[senmu-ga donna/atarashii pasokon-o katta-KA/TO]
director-N what.kind.of/new PC-A bought-Q/that
keirika-no kakaricho-ga __ kiki/ii-mashi-ta-ka.
accounting.sec-of manager-N ask/say-POL-PAST-Q

a. 'Did the manager of the accounting section ask what kind of computer the director bought?'

b. 'What kind of computer did the manager of the accounting section say the director bought?'

c. 'Did the manager of the accounting section ask whether the director bought a new computer?'

d. 'Did the manager of the accounting section say that the director bought a new computer?'


"Verbal working memory and language development"

Katherine Roe

+ more

I will be presenting some (or all) of my dissertation studies, which were designed to assess the relationship between verbal working memory and language development. One series of studies investigated whether children's proficiency at complex sentence comprehension was related to their verbal working memory development. The other series of experiments sought to determine whether sensitivity to contextual cues embedded within a sentence is working-memory dependent in adults and/or children.


"'That' as syntactic pause: Retrieval difficulty
effects on syntactic production"

Vic Ferreira

+ more

In certain common sentences, a speaker can use or omit optional words, such as the "that" in a sentence complement structure like "The poet recognized (that) the writer was boring." What is the communicative value of the mention or omission of such optional words? Two independent research threads converge to suggest an intriguing possibility: First, research on disfluent production suggests that speakers use filled pauses like "uh" and "um" specifically to communicate upcoming retrieval difficulties (Clark & Fox Tree, 2002), implying that "upcoming difficulty" is communicatively useful information. Second, research on sentence-complement production shows that speakers are more likely to omit the "that" when post-"that" material is easily retrieved from memory (Ferreira & Dell, 2000; note that similar effects have been revealed with other alternations, e.g., Bock, 1986). What has not been shown is that speakers mention "thats" more specifically when subsequent material is more difficult to retrieve; if so, then the communicative value of the "that," like a filled pause, might be to indicate upcoming retrieval difficulty.

To test this, we exploited the fact that speakers have difficulty retrieving words that are similar in meaning to other words that they have just expressed (e.g., Vigliocco et al., in press). Speakers produced sentence-complement structures in which the post-"that" material -- the embedded subjects -- was either meaning-similar (and therefore more difficult to retrieve) or meaning-dissimilar (and therefore easier to retrieve) to three nouns in the main subjects (e.g., "The AUTHOR, the POET, and the BIOGRAPHER recognized (that) the WRITER was boring" vs. "The AUTHOR, the POET, and the BIOGRAPHER recognized (that) the GOLFER was boring."). (A separate experiment independently verified this effect of similarity on retrieval difficulty.) Production was elicited with a sentence-recall procedure, where speakers read and produced sentences back after a short delay (which results in relatively free production of the "that"). The results confirmed the prediction: Speakers produced significantly more "thats" before more difficult-to-retrieve meaning-similar embedded subjects than before more easily retrieved meaning-dissimilar embedded subjects. Furthermore, meaning-similar embedded subjects were _also_ accompanied by more disfluencies, and "that"-mention and disfluency rate were significantly correlated. Thus, speakers mention "thats" more often when subsequent sentence material is more difficult to retrieve, suggesting that speakers may use "thats" (and possibly other choices of sentence form as well) to indicate such upcoming retrieval difficulties.


"Verb Sense and Verb Subcategorization Probabilities"

Doug Roland

+ more

Verbs can occur in a variety of syntactic structures. For example, the verb 'fight' can be used with only the subject (he fought), with a prepositional phrase (He fought for his own liberty), or with an NP direct object (He fought the panic of vertigo). The set of probabilities describing how likely a verb is to appear in each of its possible syntactic structures is sometimes referred to as the subcategorization probabilities for that verb. Verb subcategorization probabilities play an important role in both psycholinguistic models of human sentence processing and in NLP applications such as statistical parsing. However, these probabilities vary, sometimes greatly, between sources such as various corpora and psycholinguistic norming studies. These differences pose a variety of problems. For psycholinguistics, these problems include the practical problem of which frequencies to use for norming psychological experiments, as well as the more theoretical issue of which frequencies are represented in the mental lexicon and how those frequencies are learned. In computational linguistics, these problems include the decreases in the accuracy of probabilistic applications such as parsers when they are used on corpora other than the one on which they were trained.

I will propose two main causes of the subcategorization probability differences. On one hand, differences in discourse type (written text, spoken language, norming experiment protocols, etc.) constrain how verbs are used in these different circumstances, which in turn affects the observed subcategorization probabilities. On the other hand, the types of semantic contexts that occur in the different corpora affect which senses of the verbs are used. Because these different senses of the verbs have different possible subcategorizations, the observed subcategorization probabilities also differ.

This suggests that verb subcategorization probabilities should be based on individual senses of verbs rather than the whole verb lexeme, and that "test tube" sentences are not the same as "wild" sentences. Hence, the influences of experimental design on verb subcategorization probabilities should be given careful consideration.
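The distinction between lexeme-level and sense-level probabilities can be made concrete with a toy count table (the observations below are invented; real estimates come from corpora and norming studies):

```python
from collections import Counter, defaultdict

# Invented labeled observations: (verb, sense, syntactic frame).
observations = [
    ("fight", "combat", "NP"), ("fight", "combat", "NP"),
    ("fight", "combat", "intrans"),
    ("fight", "struggle", "PP"), ("fight", "struggle", "PP"),
    ("fight", "struggle", "intrans"),
]

def subcat_probs(obs, by_sense=False):
    """Relative frequency of each frame, per verb lexeme or per verb sense."""
    counts = defaultdict(Counter)
    for verb, sense, frame in obs:
        key = (verb, sense) if by_sense else verb
        counts[key][frame] += 1
    return {key: {frame: n / sum(c.values()) for frame, n in c.items()}
            for key, c in counts.items()}

whole_lexeme = subcat_probs(observations)            # collapses across senses
per_sense = subcat_probs(observations, by_sense=True)
```

Here the lexeme-level estimate for 'fight' assigns each frame probability 1/3, while conditioning on sense yields sharply different distributions -- exactly the situation that makes whole-lexeme probabilities unstable across corpora whose sense mixtures differ.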


'Voxel-based Lesion-symptom Mapping'

Elizabeth Bates

+ more

Lesion studies are the oldest method in cognitive neuroscience, with references to the effects of brain injury on speech going back as far as the Edwin Smith Surgical Papyrus more than 3000 years ago. Functional brain imaging is the newest method in cognitive neuroscience; the first papers applying positron emission tomography (PET) to language activation appeared in the 1980s, and the first functional magnetic resonance imaging (fMRI) studies of language appeared in the last decade. Although there are good reasons to expect convergence between lesion and imaging techniques, their underlying logic differs in important ways. Hence any differences in brain-behavior mapping that we can detect through comparison of these two methods may be just as valuable as the anticipated similarities, if not more valuable. To conduct such comparisons, we need a format in which similarities and differences between lesion studies of patients and imaging studies of normal individuals can be compared in detail, going beyond qualitative comparisons (e.g. Brodmann's Area 44 is implicated in both lesion studies and imaging studies of speech production), toward a finer-grained quantitative assessment of the degree to which a given region contributes to normal and abnormal performance on a given task.

In this talk, I will survey results that our group (including Stephen Wilson, Ayse Saygin, Fred Dick, Marty Sereno, Bob Knight and Nina Dronkers) has obtained this summer at CRL, with a new method that we have baptized Voxel-based Lesion-Symptom Mapping (VLSM). VLSM takes the same graphic and analytic formats used to quantify activations in fMRI, and applies them to the relationship between lesion sites (at the voxel level) and continuously varying behavioral scores. In our first illustrations of this method, we compare the relationship between behavioral performance and lesion sites for several subscales of a standard aphasia battery (the Western Aphasia Battery), with a particular emphasis on fluency vs. comprehension (the primary measures to distinguish between fluent and non-fluent aphasias). VLSM maps are constructed using behavioral and structural imaging data for 97 left-hemisphere damaged patients with aphasia, whose lesions have been reconstructed in a standard stereotactic space. You will see at a glance how behavioral deficits "light up" in stereotactic space, expressed as continuously varying z-scores within each voxel for patients with and patients without lesions in that voxel, and as continuously varying statistics within each voxel that represent differences in performance between patients with and patients without lesions in that particular piece of neural tissue. The striking differences displayed for speech fluency vs. auditory comprehension are consistent with 140 years of research in aphasia. However, this is the first time that these well-known lesion-symptom relationships have been mapped using continuous behavioral scores, permitting direct visual inspection of the degree to which a region contributes to behavioral deficits.

We will also show how VLSM maps can be compared across tasks, quantifying degree of similarity (using correlational statistics) and identifying the regions responsible for various degrees of association and dissociation between (for example) fluency and comprehension. This approach to inter-map correlation is useful not only for the exploration of similarities and differences in lesion-symptom mapping across behavioral domains, but also for direct comparisons of VLSM maps of behavior with fMRI or PET maps of activation in the same (or different) behavioral measures in normal subjects. Results of VLSM studies can also be used to identify "regions of interest" for fMRI studies of normal individuals. Conversely, results of fMRI studies can be used to establish regions of interest (with lower significance thresholds) for lesion-symptom mapping using VLSM. The examples we have given here are all based on language measures. Indeed, our preliminary efforts indicate that each of the subscales of the Western Aphasia Battery (e.g. repetition, naming, reading, writing, praxis) yields its own distinct VLSM map, with various degrees of inter-map correlation. However, the method is certainly not restricted to behavioral studies of language in aphasic patients; it could be used for any behavioral domain of interest in cognitive neuropsychology. Furthermore, although VLSM requires information from groups of patients, preliminary results from our laboratories indicate that it can yield reliable results with smaller groups of patients than we have employed here -- as few as 15-20 patients, depending on the robustness of the behavioral measure and its neural correlates.
It should also be possible to evaluate the lesion-symptom relationships uncovered for a single patient, by comparing the lesion location and the observed behavioral results for that patient on a given task or set of tasks with the lesion profile that we would predict based on VLSM maps of behavioral results for larger groups. Goodness-of-fit statistics can then be used to evaluate the extent to which an individual case conforms to or deviates from group profiles. Finally, the use of VLSM is not restricted to patients with focal brain injury (children or adults). In a pioneering series of studies by Metter, Cummings and colleagues, continuous resting-state metabolic scores were obtained for groups of aphasic patients using positron emission tomography. Continuous metabolic scores in several specific regions of interest were correlated with continuous behavioral scores for the same patients, uncovering regions of hypo-metabolism that were associated with behavioral deficits. The same approach can be taken on a voxel-by-voxel basis with VLSM, correlating continuous behavioral metrics with continuous rather than discrete lesion information. In principle, the latter may include resting-state and/or task-related metabolic scores in PET, perfusion scores on fMRI, perhaps even diffusion-tensor imaging information on regions of white matter, and/or zones of atrophy in patients with dementia. The limits of this method are currently unknown, although all applications will (in contrast with whole-head imaging studies of normals) be limited by the nature, origins and extent of the disease process that results in damaged tissue.
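In spirit, the per-voxel computation can be sketched as a group comparison at every voxel (simulated lesions and scores below; the actual VLSM software's statistics, covariates, and thresholding are not reproduced in this toy):

```python
import numpy as np

def vlsm_map(lesions, scores):
    """Per-voxel t-statistic (pooled variance) comparing behavioral scores of
    patients with vs. without a lesion in that voxel.
    lesions: (n_patients, n_voxels) binary matrix; scores: (n_patients,)."""
    n_voxels = lesions.shape[1]
    tmap = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        les = scores[lesions[:, v] == 1]
        spared = scores[lesions[:, v] == 0]
        if len(les) < 2 or len(spared) < 2:
            continue                      # too few patients to test this voxel
        n1, n2 = len(les), len(spared)
        sp2 = ((n1 - 1) * les.var(ddof=1) + (n2 - 1) * spared.var(ddof=1)) / (n1 + n2 - 2)
        tmap[v] = (spared.mean() - les.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
    return tmap

# Simulated data: damage to voxel 0 lowers the behavioral score.
rng = np.random.default_rng(1)
n_patients, n_voxels = 40, 50
lesions = rng.integers(0, 2, size=(n_patients, n_voxels))
scores = rng.normal(10.0, 1.0, n_patients) - 5.0 * lesions[:, 0]
tmap = vlsm_map(lesions, scores)
```

The resulting map is exactly the "continuously varying statistics within each voxel" described above: a large t at the voxel whose damage drives the deficit, and noise-level values elsewhere.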


"BIMOLA: A localist connectionist model of bilingual spoken word recognition"

Nicolas Lewy

+ more

Over the last few years, various psycholinguistic studies of bilingualism have been concerned with representational issues, such as the internal organization of the bilingual's lexicon, while fewer have examined the processes which underlie bilingual language perception. In addition, written language has been explored more than speech despite the fact that bilinguals spend more time speaking than they do writing and that, when speaking, they have to process both monolingual utterances in their two (or more) languages and mixed utterances that contain code-switches and borrowings. Based on experimental research investigating how bilinguals recognize these "guest words", we have developed BIMOLA (Bilingual Model of Lexical Access), a localist connectionist model of bilingual spoken word recognition. Inspired by McClelland and Elman's TRACE, which focuses on monolingual spoken word recognition, BIMOLA consists of three levels of nodes (features, phonemes and words), and it is characterized by various excitatory and inhibitory links within and between levels. Among its particularities, we find shared phonetic features for the two languages (in this case, English and French), parallel and independent language processing at the higher levels, and the absence of cross-language inhibition. We also note that language decisions emerge from the word recognition process as a by-product (e.g. having processed a word, BIMOLA can tell whether it was an English or a French word). The model we propose can account for a number of well established monolingual effects as well as specific bilingual findings. This talk, prepared in cooperation with Francois Grosjean, will also include a computer demonstration. Using a specially designed user interface, and time permitting, we will run various simulations on-line, display their results graphically and show some of BIMOLA's components (lexicons, language mode, parameters, etc.).


"How Chipmunks, Cherries, Chisels, Cheese, and Clarinets are Structured, Computed, and Impaired in the Mind and Brain"

Ken McRae & George S. Cree

+ more

A number of theories have been proposed to explain how concrete nouns are structured and computed in the mind and brain, and selectively impaired in cases of category-specific semantic deficits. The efficacy of these theories depends on obtaining valid quantitative estimates of the relevant factors. I describe analyses of semantic feature production norms for 206 living and 343 nonliving things covering 36 categories, focusing on seven behavioral trends concerning the categories that tend to be relatively impaired/spared together. The central hypothesis is that given the multiple sources of variation in patient testing, multiple probabilistic factors must converge for these trends to obtain. I show that they can be explained by: knowledge type (using a 9-way cortically-inspired feature taxonomy), distinguishing features, feature distinctiveness, cue validity, semantic similarity, visual complexity, concept familiarity, and word frequency.
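Two of the probabilistic factors named above, cue validity and feature distinctiveness, can be made concrete with a toy computation over miniature feature norms. The mini-norms below are fabricated for illustration and are not the actual 549-concept norms analyzed in the talk.

```python
# Toy sketch of cue validity and distinctiveness computed from invented
# feature-production norms. Concepts, categories, and features are made up.

# concept -> (category, set of produced features)
norms = {
    "dog":    ("animal", {"has_legs", "barks", "is_furry"}),
    "cat":    ("animal", {"has_legs", "meows", "is_furry"}),
    "chisel": ("tool",   {"has_handle", "is_sharp", "made_of_metal"}),
    "knife":  ("tool",   {"has_handle", "is_sharp", "used_for_cutting"}),
}

def cue_validity(feature, category):
    """P(category | feature): the proportion of concepts bearing the feature
    that belong to the category."""
    bearers = [cat for cat, feats in norms.values() if feature in feats]
    return bearers.count(category) / len(bearers)

def distinctiveness(feature):
    """1 / number of concepts a feature occurs in; distinguishing features
    (occurring in only one or two concepts) score high."""
    return 1 / sum(feature in feats for _, feats in norms.values())

print(cue_validity("is_sharp", "tool"))   # perfectly diagnostic of tools here
print(distinctiveness("barks"))           # occurs in a single concept
```

With real norms, factors like these are computed per concept and category and then tested against the patterns of sparing and impairment across patient groups.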


Maximizing Processing in an SOV Language: A Corpus Study of Japanese and English

Mieko Ueno & Maria Polinsky

+ more

A number of parser models (e.g., Pritchett 1992; Babyonyshev and Gibson 1999) are based on the idea that syntactic attachment happens at the verbal head, which gives the parser information about semantic roles and grammatical relations of argument noun phrases. Such models predict that S(ubject)-O(bject)-V(erb) languages are harder to process than SVO languages, since the parser would have to hold both S and O until it hits V, as opposed to only holding S in SVO. However, since there is no attested difference in reaction times of SOV and SVO languages for on-line processing, we hypothesize that SOV languages have strategies to compensate for the late appearance of the verb. In particular, they may differ from SVO languages in having fewer sentences with two-place predicates where both verbal arguments are expressed.

To test this hypothesis, we conducted a comparative corpus study of English (SVO) and Japanese (SOV). For both languages, root clauses (N=800) were examined with respect to the frequency of one-place (SV: intransitives) vs. two-place (SOV for Japanese, SVO for English: transitives) predicate structures and the overt expression of all arguments. Four different genres were examined in both languages: home decoration magazines, mystery novels, books about Japanese politics, and children's utterances (from CHILDES). Japanese exhibits a significantly greater use of one-place predicates than English (for example, 62.9% compared to the English 36.5% in mystery novels; p < .001 in all genres except books about Japanese politics). In addition, with two-place predicates, Japanese uses null pronouns (pro-drop), thus reducing the number of overt argument noun phrases. The use of pro-drop with one-place predicates in Japanese is significantly lower than with two-place predicates (p < .05, in all genres except mystery novels). The differences are particularly apparent in child language, where Japanese-speaking children around 3;8 had 21% transitives with 100% pro-drop and English-speaking children of the same age had 71% transitives with only 33% pro-drop. A preliminary comparison with a pro-drop SVO language (Spanish, based on Bentivoglio 1992) indicates that the distribution of pro-drop across intransitive and transitive clauses is much more even.

These results suggest that there is an extra cost associated with the processing of transitive clauses in a verb-final language. To minimize that cost, Japanese uses a significantly lower percentage of full SOV structures. Thus, processing strategies in SVO and SOV languages differ in a principled manner.


Understanding the functional neural development of language production and comprehension: a first step using fMRI

Cristina Saccuman (remotely from Milan) and Fred Dick

+ more

This study - a joint effort in the Center for Cognitive and Neural Development - is truly a developmental 'fact-finding' mission, in that we know relatively little about the neural substrates of language processing in normally-developing children. Here, we examine the BOLD response in a group of older children (10-12 yrs) and young adults (18-30) who performed our workhorse picture naming and sentence interpretation tasks in the magnet. I'll present the results of our initial analyses, and will also discuss some of the difficulties inherent in conducting and interpreting developmental fMRI experiments.


Neural systems supporting British Sign Language processing

Mairead MacSweeney

+ more

Exploring the neural systems that support processing of a signed language can address a number of important questions in neuroscience. In this talk fMRI studies of British Sign Language (BSL) processing will be presented and the following issues addressed: What are the similarities and differences in the neural systems that underlie BSL and audio-visual English processing in native users of the language (deaf native signers vs. native hearing English speakers)? What is the impact of congenital deafness on the functioning of auditory cortex - is there evidence for cross-modal plasticity? Does the extent to which sign space is used to represent detailed spatial relationships alter the neural systems involved in signed language processing?


Teaching Children with Autism to Imitate Using a Naturalistic Treatment Approach: Effects on Imitation, Social, and Language Behaviors

Brooke Ingersoll & Laura Schreibman UCSD

+ more

Children with autism exhibit deficits in imitation skills both in structured settings and in more natural contexts such as play with others. These deficits are a barrier to the acquisition of new behaviors as well as to socialization and communication, and are thus an important focus of intervention. Research indicates that naturalistic behavioral treatments are very effective at teaching a variety of behaviors to children with autism and mental retardation. Variations of these techniques have been used to teach language, play, social, and joint attention skills; however, they have not yet been used to teach imitation skills. We used a single-subject, multiple-baseline design across three young children with autism to assess the benefit of a newly designed naturalistic imitation training technique. Participants were observed for changes in imitative behavior as well as in other closely related social-communicative behaviors (language and joint attention). Results suggest that this intervention can successfully increase imitative behaviors in young children with autism and also has a facilitative effect on language and joint attention.


"Hierarchical organisation in spoken language comprehension: evidence from functional imaging"

Matt Davis

+ more

Models of speech comprehension postulate multiple stages of processing, although the neural bases of these stages are uncertain. We used fMRI to explore the brain regions engaged when participants listened to distorted spoken sentences. We applied varying degrees of three forms of distortion, and correlated BOLD signal with the intelligibility of sentences to highlight the systems involved in comprehension. By contrasting different forms of distortion we can distinguish between early (acoustic/phonetic) and late (lexical/semantic) stages of the comprehension process. The increased demands of comprehending distorted speech (compared to clear speech) appear to modulate processes at both of these levels.


"Complex Morphological Systems and Language Acquisition"

Heike Behrens

+ more

The German plural has figured prominently in the Dual Mechanism Model of inflection: of the eight plural markers, the -s plural is special because it is of low frequency and at the same time largely unconstrained in terms of the morphonological properties of the noun root it combines with. In the Dual Mechanism Model it was hypothesized that the -s plural serves as the default affix: supposedly, irregular forms are stored holistically, and errors occur when lookup in memory fails. I will address the predictions of this model for language acquisition with a particularly detailed case study (12,000 plural forms): error patterns show that some highly predictable plurals are acquired without errors, whereas other sets of nouns with low predictability of the plural marker show error rates of up to 40%. Hence, plural errors are not due to random or frequency-based "retrieval failure", but indicate ambiguities in the plural system. Second, the distributional properties of the -s plural are acquired in a piecemeal fashion by generalization over the subsets of nouns it applies to: -s errors occur only in morphonological domains where the -s plural is attested. In sum, neither plural errors nor the acquisition of the -s plural suggest that a second, default mechanism is at work.


Phonological awareness in children in and out of school

Katie Alcock

+ more

Phonological awareness is a composite skill including awareness of words, phonemes, and phonological similarities, and the ability to break down words into component parts. Skill in phonological awareness tasks predicts future or concurrent reading skill; however, some phonological awareness tasks are not possible for preschool children or illiterate adults. This study aims to investigate the direction of causality by studying children who cannot read through lack of opportunity rather than lack of aptitude.

The study aimed to investigate the impact of age and schooling on phonological awareness in an age group that in Western settings would already be at school. A two by four (attending and never attended school groups, with four age groups in each schooling group) design was employed.

Matched groups of Tanzanian children aged 7 to 10 years with no schooling or in first or second grade performed reading tests and phonological awareness tests.

Most phonological awareness tests were predicted better either by reading skill or by exposure to instruction than by age. Letter reading skill was more predictive of phonological awareness than word reading skill.

While some tests could be performed by nonreaders, some tests were only performed above chance by children who were already able to read and hence we conclude that these tests depend on reading skill, and more particularly letter reading skill. We discuss the implications of these findings for theories of normal reading development and dyslexia.


Components and Consequences of Attentional Control Breakdowns in Healthy Aging and Early Stage Alzheimer's Disease

Dave Balota

+ more

A series of studies (e.g., semantic priming, semantic satiation, Stroop, false memory) will be reviewed that address the nature of changes in attentional systems in healthy older adults and in AD individuals. Attempts will be made to show how attentional selection and maintenance of attentional set across time underlie some of the memory breakdowns produced in these individuals.


A point of agreement between generations: an electrophysiological study of grammatical number and aging

Laura Kemmer

+ more

A topic of current debate in the aging literature is whether the slowing of mental processes suggested by some measures (e.g., reaction times) is a generalized phenomenon affecting all aspects of mental processing, or whether some aspects of processing are spared. In the domain of language, it has been suggested that processing, at least of some syntactic phenomena, is slowed. However, most of these studies have examined complex syntactic phenomena (e.g., relative clauses or passive formation) and have used end-product dependent measures such as response times rather than online measures of processing. Moreover, at least some of the syntactic phenomena examined are known to have a substantial working memory component, thus making it difficult to determine whether the observed slowing is due to limitations in working memory or syntactic processing per se or both. In an attempt to tease apart the individual contribution of syntactic processing, we used electrophysiological measures to examine grammatical number agreement. This is an area of syntax which does not seem to have a strong working memory component. We used an on-line dependent measure, event-related potentials, so that we could better examine the time course of processing as it unfolded. We recorded ERPs from older and younger subjects as they read sentences which did or did not contain a violation of grammatical number agreement (subject/verb or reflexive pronoun/antecedent). For young and old alike, these violation types elicited a late positivity (P600/SPS), the timing of which did not differ reliably as a function of age. The distribution of these ERP effects, however, did differ with age. Specifically, in younger adults, the syntactic violations compared to their control items elicited a positivity that was large posteriorly and small anteriorly, and slightly larger over right than left hemisphere sites. In contrast, in older adults, the effect was somewhat more evenly distributed in both the anterior-posterior and left-right dimensions: the elderly showed relatively more positivity over anterior sites than the young, with a more symmetrical left-right distribution. Thus, while we obtained no evidence that the appreciation of two types of number agreement in written sentences (presented one word at a time) slows significantly with normal aging, the observed difference in scalp distribution suggests that non-identical brain areas, and thus perhaps different mental processes, may be involved in their processing with advancing age.


Lexical Decision and Naming Latencies for Virtually All Single Syllable English Words: Preliminary Report from a Wordnerd's Paradise

Dave Balota

+ more

Results will be reported from a study in which 60 participants provided naming or lexical decision responses to over 2800 single syllable words. These are the same items that have been the focus of connectionist models of word naming. In the first part of the talk, discussion will focus on the predictive power of available models at the item level, compared to standard predictors such as log frequency and word length. In the second part of the talk, analyses across the naming and lexical decision tasks will be provided that compare the predictive power at distinct levels (e.g., phonological onsets, word structure variables such as length, feedforward consistency, feedback consistency, orthographic neighborhood size, and word frequency, and meaning level variables such as imageability, and Nelson's set size metric). Discussion will focus on task specific differences and the role of attention in modulating the contribution of different sources of information to accomplish the goals of a specific task. Emphasis will also be placed on the utility of large scale databases in clarifying some controversies that have arisen in the smaller scale factorial designs that are still the standard in the visual word recognition literature.
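The item-level analysis described above can be sketched as an ordinary regression: fit standard predictors to per-item mean latencies and ask how much variance they explain, the baseline against which model predictions are compared. The items and numbers below are fabricated purely to show the method; they are not data or results from the study.

```python
# Hedged sketch of an item-level regression on a (toy) megastudy table.
import numpy as np

# toy items: (log_frequency, length_in_letters, mean_naming_latency_ms)
items = np.array([
    [3.2, 3, 520.0],
    [1.1, 6, 610.0],
    [2.5, 4, 555.0],
    [0.7, 7, 640.0],
    [2.9, 5, 540.0],
])
X = np.column_stack([np.ones(len(items)), items[:, 0], items[:, 1]])  # intercept + predictors
y = items[:, 2]

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
pred = X @ beta
r_squared = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(r_squared)   # variance in item means explained by the two predictors
```

With the full database, the same R-squared comparison can be run for each candidate model's item-level predictions against this standard-predictor baseline.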


From Theory to Practice: Addressing the Pediatrician's Dilemma

Shannon Rodrigue

+ more

Specific Language Impairment (SLI) is a disorder that can be identified on the basis of delayed onset and protracted development of language relative to other areas of development, and is generally identifiable during the preschool years. A child may be identified as being at risk for SLI before age three if she is a "Late Talker," a child with a very small productive vocabulary at around two years of age. (Virtually all children with SLI were first Late Talkers.) The "pediatrician's dilemma" refers to the logistical difficulties associated with determining which infants or toddlers might eventually be Late Talkers and thereby also at risk for SLI. Thal (2000) has made progress toward addressing this dilemma by finding that rate of growth in comprehension vocabulary (by parent report on the MacArthur CDI) at the earliest ages of linguistic development is a strong predictor of later productive vocabulary at 28 months (at the group level). The present study evaluates whether an abbreviated version of the same parent report instrument (the Short Form CDI) will yield equally positive findings. I also extend Thal (2000) by considering prediction at the level of individual children. Findings, their implications, and future directions are discussed in terms of theoretical and applied significance.


Pushing the Limits of Word Comprehension in Normal and Aphasic Listeners

Suzanne Moineau

+ more

Most aphasiologists have agreed that, although the linguistic profiles seen in aphasics are quite complex, there has been enough evidence of similarities and differences among patients to warrant classification of these individuals into discrete groups. For more than a century now, we have known that lesions in the vicinity of Broca's area produce a non-fluent type aphasia that is characterized by telegraphic speech, with relatively preserved auditory comprehension; whereas lesions involving Wernicke's area produce fluent type aphasias, characterized by paraphasic errors and a significant impairment in auditory comprehension. Though more recent research has uncovered deficits in the auditory comprehension of Broca's aphasics on complex and non-canonical sentence types, there is little in the literature to suggest that Broca's aphasics have deficits with comprehension of single words, unlike Wernicke's aphasics. The differences noted in fluency and comprehension patterns have formed much of the basis for differential diagnosis of aphasia symptoms into these discrete classifiable categories. It is my contention that the deficits seen in aphasic individuals are better defined as continuous, and as such a seemingly preserved function (like word comprehension in Broca's aphasics) may be vulnerable to breakdowns under sub-optimal processing conditions (such as noisy environments, diminished hearing associated with general aging, or fatigue). The current study aimed to investigate the effects of perceptual degradation on receptive lexical processing in college-aged individuals, normally aging adults, and individuals with brain injury (both left and right hemisphere lesions), in an attempt to uncover break points in lexical comprehension in varying populations. I won't spoil the surprise....


Coherence and Coreference Revisited

Andrew Kehler

+ more

The principles underlying the interpretation of pronominal reference have been extensively studied in both computational and psycholinguistics, but little consensus has emerged. In this talk, we revisit Hobbs's (1979) hypothesis that coreference is simply a by-product of establishing discourse coherence, in light of counterevidence that has motivated attentional state theories such as Centering (Grosz et al., 1995 [1986]; Brennan et al., 1987). While proponents of Centering have correctly argued that Hobbs's account cannot model a hearer's "immediate tendency" to interpret a pronoun, we show that Centering also suffers from this drawback (Kehler, 1997). We then show how a seemingly self-contradictory collection of data patterns with a neo-Humean trichotomy of coherence relations that has been used in analyses of VP ellipsis, gapping, extraction, and tense interpretation (Kehler, 2002). This data can be accounted for by modeling attention within the dynamic inference processes underlying the establishment of coherence relations, as opposed to modeling discourse state on a clause-by-clause basis using superficial cues in the manner posited by attentional state theories.


Embodiment and language discussion session

Ayse P. Saygin

+ more

A message from Ayse Saygin:

Hello everyone,

For the quarter's first CRL colloquium we will have a discussion session on the topic of embodiment and language, covering both linguistic and experimental aspects. The discussion will be moderated by Elizabeth Bates and Tim Rohrer. As usual, we are meeting in CSB 280 at 4:00 pm.

Hope to see you all there !

Ayse P. Saygin


Verb Aspect and the Activation of Event Knowledge in Semantic Memory

Todd Ferretti

+ more

Previous psycholinguistic research has shown that verb aspect modulates the activation of event information explicitly given in a text. For example, events presented as ongoing (was verbing - past imperfective aspect) are foregrounded in a reader's mental model of the discourse, and these events (including the participants and objects associated with the events) tend to remain active for long durations if there are no further time shifts in the discourse. Alternatively, events presented as completed (had verbed - past perfect or verbed - perfective) tend to be backgrounded in the reader's mental model, decreasing the activation of the event over subsequent discourse.

How verb aspect modulates the activation of world knowledge about common events has received little attention, a fact that is surprising given the important role that background knowledge of events plays in language comprehension. The main goals of the following research were to examine 1) how verb aspect influences the activation of information about events from semantic memory, 2) how people use aspect and world knowledge to make causal bridging inferences, and 3) how semantic memory and aspect interact in phrases, sentences, and larger discourses.

A number of different experimental methodologies were employed (including semantic priming, inferencing tasks, sentence completions, and ERP) to examine these issues. Results indicate that (1) knowledge of common event locations is more activated following verbs marked as ongoing (was skating - arena) than completed (had skated - arena), (2) that people complete sentence fragments such as "The diver was snorkeling...." with locative prepositional phrases more often with past imperfective than past perfect aspect, (3) that people seem to have more difficulty integrating locative phrases following verbs marked with past perfect aspect during on-line sentence comprehension, and (4) that they utilize world knowledge about the outcomes of events differently depending on the aspectual form of the verbs denoting causal actions.

These results have implications for models of how grammatical information and background knowledge interact to constrain expectations and/or inferences about events mentioned in a discourse.


"Free word order" and Focus Ambiguities: A case study of Serbo-Croatian

Svetlana Godjevac

+ more

How does scrambling interact with focus, and what are the implications for processing and acquisition? In Serbo-Croatian, informational prominence (i.e., focus) can be expressed either prosodically, by a phrase accent, or syntactically, by word order. Contrary to standard assumptions, I show that even with non-neutral word order (in this case, non-SVO), a sentence can be ambiguous with respect to focus. As an example of implications of these results, I will suggest that the claim of Radulovic (1975) that children acquiring Serbo-Croatian at the age of 1;8 through 2;8 lack pragmatic word orderings must be reconsidered. I will offer a reanalysis of her data based on my theory of focus projection that shows that Serbo-Croatian children acquire pragmatic word orderings as early as 1;8.


Towards optimal feature interaction in neural networks

Virginia De Sa

+ more

I'll start by reviewing the problem of why unsupervised category learning is difficult and present an algorithm I developed that makes use of information from other sensory modalities to constrain and help the learning of categories within single modalities.  I will then show that there is a key difference in the processing required for combining inputs within a sensory modality as opposed to that required for combining inputs between sensory modalities.  Finally, I'll show that similar issues are present in supervised learning algorithms; performance can be improved by changing the way inputs interact.  I will show examples from specially constructed problems as well as real world problems where performance is improved when some of the inputs are not used as inputs but used as outputs instead.  This last part is joint work with Rich Caruana.


"Halting in Single Word Production: A Test of the Perceptual Loop Theory of Speech Monitoring"

Bob Slevc

+ more

The concept of a prearticulatory editor or monitor has been used to explain a variety of patterns in the speech error record. The perceptual loop theory of editor function (Levelt, 1983) claims that inner speech is monitored by the comprehension system, which detects errors by comparing the comprehension of formulated utterances to the originally intended concepts.
In this study, three experiments assessed the perceptual loop theory by looking at differences in the ability to inhibit word production in response to stop signals that varied in terms of their semantic or phonological similarity to the intended word. Subjects named pictures and sometimes heard (Experiment 1) or saw (Experiments 2 and 3) a word different from the picture name, which served as a signal to stop their naming response. When the signal was phonologically similar to the picture name, subjects had more difficulty stopping speech than when the signal was phonologically dissimilar to the picture name. This shows that inhibiting word production is more sensitive to phonological than to semantic similarity of a comprehended word, suggesting that errors are detected and avoided by comparing at a phonological rather than at a semantic level.


Modeling semantic constraints in sentence processing

Robert Thornton

+ more

We present a connectionist model of sentence processing to examine semantic effects. Previous connectionist work on sentence processing has used SRNs, which learn distributional information regarding sequential constraints on constituents, as well as other grammatical phenomena (e.g., agreement). The current model differs from previous work both in task (recognizing the current word, rather than predicting the next word) and representation (distributed semantic representations rather than localist lexical ones).

The model maps distributed syllabic representations onto distributed semantic (i.e., featural) representations. We examined the interaction of lexical, semantic, and distributional constraints in processing syntactic category ambiguities, such as "the desert trains", in which "trains" can be a noun ("the desert trains were late") or a verb ("the desert trains the soldiers").

The model was trained on 20,000 word triples from the parsed WSJ and Brown corpora. For each phrase, the network was presented with each word in succession (e.g., DESERT TRAINS ARE). The target was the correct semantics for the current word. A pair of "interpretation" nodes were connected to the semantic and context representations, encoding the interpretation of the phrase as NN or NV. High level semantic features (such as ISA-ENTITY), pragmatic constraints, and item specific regularities were all combined by the network and utilized to the extent to which they were informative, replicating the results of MacDonald (1993).

More generally, the model was able to calculate distributional statistics over the distributed semantic representations. It subsequently developed a representation of the contexts that a word appears in. Because the model generated this measure of the plausible semantics of possible continuations, it began to partially activate the relevant semantic features of the upcoming word before it was presented, such that plausible continuations (i.e., words with consistent semantic features) were easier to process. Thus, in this model, contextual facilitation arose because at a given point in processing, the current input reliably cued relevant semantic features of the subsequent input (see Federmeier & Kutas, 1999; Schwanenflugel & Shoben, 1985, for support for such models). The nature of such facilitation, as well as a semantic account of grammatical processing, will be discussed.
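The contextual-facilitation idea just described, partial pre-activation of the semantic features of plausible continuations, can be sketched with a toy similarity computation: the closer the context-based estimate of upcoming semantics is to the next word's actual features, the easier that word is to process. The feature vectors below are invented for illustration, not the model's trained representations.

```python
# Toy sketch of semantic-feature pre-activation as vector overlap.
import numpy as np

def facilitation(predicted, actual):
    """Cosine overlap between predicted and actual semantic feature vectors;
    higher overlap means more pre-activation, hence easier processing."""
    return float(predicted @ actual /
                 (np.linalg.norm(predicted) * np.linalg.norm(actual)))

# Invented semantic features: [ISA-ENTITY, ANIMATE, VEHICLE, ACTION]
context_prediction = np.array([0.9, 0.1, 0.8, 0.2])  # estimate after "the desert ..."
trains_noun = np.array([1.0, 0.0, 1.0, 0.0])         # vehicle reading of "trains"
trains_verb = np.array([0.0, 0.0, 0.0, 1.0])         # action reading of "trains"

print(facilitation(context_prediction, trains_noun))
print(facilitation(context_prediction, trains_verb))
```

In this toy setup the context estimate happens to favor the noun reading; in the actual model such estimates emerge from distributional statistics learned over the training corpus.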


Temporal Processing and Language Disorders: Review and Evaluation

Don Robin

+ more

This discussion will overview temporal processing as a proposed cause of language disorders in adults and children. The discussion will provide an historical overview, followed by a description of some data-based studies. Finally, the theoretical soundness of the concept will be addressed with reference to a treatment called "FastForWord".


A Connectionist Investigation of Linguistic Arguments from the Poverty of the Stimulus: Learning the Unlearnable

John Lewis

+ more

Based on the apparent paucity of input, and the non-obvious nature of linguistic generalizations, Chomskyan linguists assume an innate body of linguistically detailed knowledge, known as Universal Grammar (UG), and attribute to it principles required to account for those properties of language that can reasonably be supposed not to have been learned (Chomsky, 1975). A definitive account of learnability is lacking, but is implicit in examples of the application of the logic. Our research demonstrates, however, that important statistical properties of the input have been overlooked, resulting in UG being credited for properties which are demonstrably learnable; in contradiction to Chomsky's celebrated argument for the innateness of structure-dependence (e.g. Chomsky, 1975), a simple recurrent network (Elman, 1990), given input modelled on child-directed speech, is shown to learn the structure of relative clauses, and to generalize that structure to subject position in aux-questions. The result demonstrates that before a property of language can reasonably be supposed not to have been learned, it is necessary to give greater consideration to the indirect positive evidence in the data and that connectionism can be invaluable to linguists in that respect.
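The simple recurrent network at the heart of this demonstration can be sketched in a few lines: the defining move is the context layer that copies back the previous hidden state, giving the network a memory for sequential structure over which generalizations like aux-fronting can be learned. The dimensions, weights, and toy input sequence below are illustrative assumptions, not the network reported in the talk.

```python
# Minimal simple recurrent network (Elman, 1990) forward pass.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 5, 8, 5          # e.g., one-hot word codes in and out
W_ih = rng.normal(0, 0.1, (n_hid, n_in))    # input -> hidden weights
W_hh = rng.normal(0, 0.1, (n_hid, n_hid))   # context (previous hidden) -> hidden
W_ho = rng.normal(0, 0.1, (n_out, n_hid))   # hidden -> output weights

def step(x, context):
    """One time step: the hidden state depends on the current input AND the
    copied-back hidden state from the previous step, so order information
    accumulates across the sequence."""
    h = np.tanh(W_ih @ x + W_hh @ context)
    y = W_ho @ h                       # scores for the predicted next word
    return y, h

context = np.zeros(n_hid)
sequence = [np.eye(n_in)[i] for i in (0, 3, 1)]   # toy three-word "sentence"
for x in sequence:
    y, context = step(x, context)
print(y.shape, context.shape)
```

Training such a network to predict the next word on child-directed speech is what allows the indirect positive evidence in the input to shape its generalizations.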



Todd Haskell

+ more

In English and many other languages, the marking of qualities like noun number and verb tense has a quasi-regular character. To take noun number as an example, most nouns in English form their plural by adding the suffix '-s', e.g., 'rat' -> 'rats', 'book' -> 'books'. However, there are alternative ways of forming the plural that apply to only a few nouns or even a single noun, e.g., 'mouse' -> 'mice', 'goose' -> 'geese'.

Over the past two decades, there has been considerable debate over whether this sort of phenomenon is best accounted for by two mechanisms - one for the 'regular' cases, another for the 'exception' or 'irregular' cases – or a single mechanism which handles both sorts of cases. Sharp dissociations between the behavior of regular and irregular words have been used to argue for the dual-mechanism view. One apparent dissociation of this sort involves the interaction between pluralization and compound word formation. It has been noted that irregular plurals can appear in the modifier (left) position of noun-noun compounds, e.g. 'mice-eater', while regular plurals seem to be prohibited, e.g., '*rats-eater'.

The current project draws together reanalysis of previous work, new behavioral data, and computer modeling to argue that the constraints on plural modifiers in compounds are much more complex than the conventional characterization would suggest, and, as a consequence, that they are not easily accounted for within a dual-mechanism framework. An alternative account is proposed in which the acceptability of modifiers in compounds is determined by the interaction of multiple probabilistic ("soft") constraints. It is shown that such an approach, which does not make an explicit distinction between regulars and irregulars, actually provides a superior account of the data. Thus, the compounding phenomena, far from supporting the dual-mechanism view, actually present it with a serious challenge.


Age of acquisition ratings: actions and objects.

Gowri Iyer

+ more

Certain word attributes, such as frequency, have traditionally been thought to be the best predictors of performance on lexical tasks (e.g., picture naming). However, mounting evidence suggests that in certain lexical tasks, frequency effects may be wholly or partly explained by age of acquisition (AoA). In my talk tomorrow I will present the results of an age-of-acquisition study in which adults' ratings and response times were collected for 520 nouns and 275 verbs. The resulting AoA ratings were (1) reliable, replicating the AoA effects reported in earlier studies (for objects only), (2) valid, correlating highly with developmental data, and (3) the most powerful predictors of performance on a picture-naming task when compared to other predictor variables such as frequency. Discussion focuses on attempting to understand AoA's potency as a predictor, as well as some future directions.


Psychophysics of Verb Conjugation

Antonella Devescovi, Simone Bentrovato, Elizabeth Bates et al.

+ more

Most of what is currently known about lexical access is based on studies of English nouns, in citation form, in the visual modality, typically through some kind of lexical-decision task. There is also a small literature, important on theoretical grounds, about the processing of regular vs. irregular past tense forms of verbs (especially in English). Beyond this, surprisingly little is known about how listeners process inflected verbs -- especially in richly inflected languages, in the auditory modality. Within the context of an interactive-activation model (the Competition Model), extended to account for real-time processing (as implemented in Elman's recurrent nets), our group has been studying the processing of inflected verbs in context. It became increasingly clear to us that this research is hampered by the absence of basic information about how listeners recognize inflected verbs in the first place. This realization motivated us to undertake a basic parametric study of how Italian listeners perceive and process inflected verbs, presented in randomized lists. On October 2, I will attempt a "first draft" presentation of results from this study. Input from members of the CRL community (especially our colleagues in linguistics) will be not only helpful, but crucial.
Fifty native speakers of Italian (college age) participated in one of two tasks. Half were asked to repeat auditorily presented (digitized) verbs as quickly and accurately as possible (i.e., the cued shadowing technique). The other half were asked to generate a subject pronoun that agrees with the verb. Fifty different verbs (all taken from the Italian CDI to represent the first verbs acquired by children) were presented in all six person/number combinations, within four of the many tense/aspect conditions available in the language (present indicative, imperfect past, future, remote past). All analyses are conducted over items (averaging over subjects within each task), to determine the physical and linguistic properties of inflected verbs that contribute positively or negatively to reaction times in each task. Predictors include multiple measures of word length (duration of the whole word; length of root and suffix after the root; length of stem and suffix after the stem; number of syllables; number of characters -- a good approximation of the number of phonemes in Italian), prosody (stress position; canonicity, i.e. whether the penultimate syllable is stressed), phonetics (presence/absence of initial frication), frequency (of the whole word and of the inflected form, from a spoken-word corpus), transitivity, and whether or not the word represents a concrete action. We also assessed effects of regularity, defined three different ways (in keeping with a very confusing literature on this topic).
Results indicate that Italian listeners are exquisitely sensitive to the unfolding of word structure in real time, using multiple sources of information, quickly and efficiently. Frequency effects are observed for both regulars and irregulars, regardless of how they are defined, in contrast with predictions based on Pinker and Ullman's Dual Mechanism theory. Regularity effects on reaction time appear to be explained by lower-level factors like length, frequency and word structure. However, significant effects of tense and person (and their interaction) remain when all other predictors are controlled, suggesting that either (a) we have failed to identify all the lower-level factors that contribute to these constructs, or (b) the dimensions of tense and person are emergent properties of the system that have a causal impact on the recognition and processing of inflected verbs above and beyond their lower-level correlates. Results have (we think) some important implications for verb processing within a structured context, leading to clear predictions about the effects of context on the "recognition point" for inflected verbs.


Effects of Prior Mention on Sentence Production and Word Recall

H. Wind Cowles and Victor Ferreira

+ more

Studies of language production have shown that speakers tend to place easily retrieved arguments early in sentences. For example, Bock (1977) and Bock and Irwin (1980) reveal that previously given information shows an early-mention advantage, as it tends to be mentioned before new information. They suggest that this effect may come from the greater retrievability (both lexically and conceptually) of given information over new information, rather than from the discourse status of the arguments per se. However, it is still unclear from these studies whether discourse status affects sentence production directly. Also, there are different ways to establish information as given, many of which actually confer a different discourse status on that information. The present study examines the role of discourse status in sentence production by looking at topichood and givenness. Two experiments were conducted to see if discourse status affects sentence production, and if so, whether that effect is due to a more general effect of increased lexical activation.
In the first experiment, we found that topic arguments show an early-mention advantage over given arguments, suggesting that topic- versus given-status exerts a specific effect on target sentence production. Thus, a speaker's choice of a sentence structure is sensitive not only to whether an argument is mentioned previously, but how it is mentioned, such that arguments that are previously mentioned as topics are especially likely to be mentioned early.
A second experiment looking at word recall found that the effect of discourse status goes away when speakers are asked to recall a list of words rather than use them in a sentence. This suggests that the effect of discourse status is not due to lexical activation, but is specific to the process of forming sentences.


Verbal and Non-verbal Auditory Processing in Aphasic Patients

Ayse Pinar Saygin

+ more

Previous findings indicate that left-hemisphere lesions may impair associative and/or semantic processing of auditory information, not only in linguistic but also in non-linguistic domains. In this talk, I will present a study of the online relationship between verbal and non-verbal auditory processing, examining aphasic patients' abilities to match environmental sounds and corresponding phrases to simple line drawings. In this study, we also manipulated the degree of competition between the visual target and foil in both verbal and non-verbal conditions. Overall, we found robust group differences in performance: all patient groups were impaired relative to normal controls. Broca's and Wernicke's aphasics were most impaired, while anomic and RHD patients performed similarly to each other, showing less severe deficits. There was also a reliable effect of foil type (related vs. not related to the target) that generalized across groups. We found that impairments in the verbal and non-verbal domains tended to go hand in hand; there was very little evidence for the relative preservation of non-verbal auditory processing in this set of aphasic patients, a result that is surprising under the view of aphasia as a primarily linguistic deficit. Instead, the results suggest that there is significant overlap in the processes and neural resources utilized in verbal and non-verbal processing of auditory information.


Evidence for a U-shaped Learning Curve

Michael Klieman

+ more

This study examines the acquisition patterns of English unaccusative verbs by learners of English as a second language (ESL). Previous studies of written production (Oshita 1998, 2000, Zobl 1989) found that intermediate to advanced ESL learners produced ungrammatical unaccusative forms about 10% of the time, and that the vast majority of these errors were "passive" unaccusative errors (*The boys were arrived). While one of Oshita's claims is that beginners do not make such errors, he did not control for proficiency level in his study. The present study reports two experiments building on Oshita's work, this time testing three skills: spoken production, written production, and error recognition. The experiments were also crucially controlled for level. The finding was that the acquisition pattern of unaccusatives is actually U-shaped: at later stages of acquisition, ESL learners stopped producing ungrammatical unaccusative verbs and produced only grammatical ones. The results showed not only that the error rate in both production tasks stayed constant at 10%, but also that learners actually stopped producing ungrammatical unaccusative forms after the advanced level (James 1985). These data indicate that ESL learners have some hope, at least with respect to the acquisition of unaccusative verbs: in later stages of acquisition, unaccusative structures are acquired and are no longer subject to non-target passivization. These findings are significant for the field of second language acquisition research, as the pattern of acquisition shown here closely mirrors that of first language research, suggesting that there must be some parallels between the two types of acquisition.


How (Not) to Build a Language:
The Trouble With Pronouns

Ezra van Everbroeck

+ more

There is a large space of possible natural languages, but only some types are attested. This raises the question of how many of the gaps are the result of historical accident and how many are unattested because they are in some way unlearnable. Using connectionist simulations, I have explored the learnability issue by testing how easy it is to determine 'who did what to whom' in a broad range of possible languages. The linguistic parameters tested include word order, case marking, head marking, pronouns, pro-drop and agreement. In the talk, I will present results on the effect of each of these parameters and describe how they interact. I will also consider whether the connectionist models work roughly like the parsing strategies used by children acquiring a language.


Can high-dimensional memory models have affordances? Comparing HAL, LSA and the Indexical Hypothesis.

Dr. Curt Burgess

+ more

High-dimensional memory models capture meaning by encoding and transforming the contexts in which words appear (HAL: Burgess & Lund, 1997; LSA: Landauer & Dumais, 1997). Glenberg and Robertson (JML, 2000) argue that the encoding of abstract symbols that are arbitrarily related to what they signify (no symbol grounding) is an implausible approach to modeling meaning. Their subjects made sensibility and envisioning judgements about sentences that were related, afforded, or non-afforded, showing a preference for the afforded and related conditions; LSA shows only a relatedness effect. They conclude that high-dimensional models are crippled when dealing with novel sentences. We use the HAL and LSA models to respond to their claims with a series of six experiments. We suggest that the symbol grounding issue, as articulated by Glenberg and Robertson, is a red herring, and discuss the abilities and limitations of high-dimensional memory models with respect to modeling sentence comprehension.
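For readers unfamiliar with how such models derive meaning from context, here is a minimal sketch of a HAL-style co-occurrence space (toy corpus, left-context window only, and illustrative distance weights; not the actual HAL implementation or its parameters):

```python
import math
from collections import defaultdict

def hal_vectors(tokens, window=2):
    """Each word's 'meaning' is a vector of weighted counts of the words
    that precede it within a sliding window; closer neighbours receive
    higher weights, as in HAL."""
    vecs = defaultdict(lambda: defaultdict(float))
    for i, word in enumerate(tokens):
        for d in range(1, window + 1):
            if i - d >= 0:
                vecs[word][tokens[i - d]] += window + 1 - d
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dims = set(u) | set(v)
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in dims)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = "the cat chased the mouse and the dog chased the cat".split()
vecs = hal_vectors(corpus)
sim_cat_dog = cosine(vecs["cat"], vecs["dog"])  # nonzero purely via shared contexts
```

Nothing in the vectors is grounded in perception or action; similarity emerges entirely from overlapping contexts of use, which is exactly the property at issue in the symbol-grounding debate.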


96 Sentences

Frederick Dick & Marty Sereno

+ more

A fundamental challenge for developing children is making productive use of information in their environment, particularly when these cues take relatively abstract forms. One area of protracted development in this regard is children's use of sentential cues to agency, such as word order and agreement morphology. Here, we use data from a sentence interpretation task to trace the costs and benefits of informational cue use, with a special emphasis on the effects of brain damage or learning impairments on language development. We compare these data to those from similar experiments on normal adults and aphasic patients, and relate them to a frequency- and processing-based account of language processing skills.


Aging and individual differences in auditory sentence processing

Kara Federmeier

+ more

The contents and organization of semantic memory seem to remain relatively intact over the adult life-span, but less is known about how such information is accessed and used in real time during language processing. In this talk I will present event-related potential (ERP) data collected while younger (20-30 years old) and older (60-75 years old) adults listened to pairs of sentences (as continuous, natural speech) for comprehension. The sentence contexts varied in their constraint and ended with either (1) the word most expected in the context ("expected exemplar"), (2) an unexpected word from the expected semantic category ("within-category violation"), or (3) an unexpected word from a different semantic category ("between-category violation"). Data from younger subjects replicated the pattern previously observed for word-by-word reading with the same materials. The observed pattern suggests that the younger subjects actively use context information to prepare for the processing of likely upcoming stimuli (i.e., to predict).

In contrast, older adults' data patterned with plausibility and did not show strong effects of sentential constraint. Older adults clearly comprehend the sentences, but seem to use predictive context information less effectively. A subset of older adults, however, showed the younger response pattern, and the tendency to do so was highly correlated with several neuropsychological measures. Thus, resource availability may offset certain age-related changes in how semantic memory is accessed during sentence processing.


Comparing Lexical Access for Nouns & Verbs in a Picture Naming Task

Elizabeth Bates

+ more

Most of what we currently know about word recognition and retrieval is based on the study of English nouns (usually concrete, monosyllabic English nouns). There has, however, been a recent resurgence of interest in the effects of form class (content words vs. function words; nouns vs. verbs) on lexical access, within and across languages. Questions and controversies about the differential processing of nouns and verbs have come up simultaneously in at least four areas: (1) potential dissociations between noun and verb access in aphasic patients; (2) functional brain imaging studies indicating partial dissociations in the neural regions that mediate nouns vs. verbs; (3) cross-linguistic studies of early child language that challenge the long-standing assumption that nouns are always acquired before verbs, and (4) real-time processing studies of noun vs. verb access, inside and outside of a phrase or sentence context. Our group has been working in all of these areas, and to serve these disparate goals, we have undertaken a large norming study comparing lexical access for concrete nouns vs. verbs. Although these studies are being conducted in several languages and modalities, our largest initiative to date has been a comparative study of action vs. object naming. In this presentation, I will give an overview of our preliminary results for action vs. object naming in English, based on 520 black-and-white pictures of everyday objects and 275 black-and-white drawings of concrete transitive and intransitive actions. Dependent variables include percent name agreement (for each item, percent of subjects who produced the dominant response, also called the "target name"), reaction time to produce the dominant response, and number of alternative names provided by the 100 subjects who participated in the study (50 subjects for object naming, another 50 for action naming). 
Independent variables include objective visual complexity of the pictures (based on JPG file size), and several attributes of the dominant response that are potential predictors of naming behavior, including objective age of acquisition (based on the MacArthur Communicative Development Inventories), log natural frequency, length, initial frication, word complexity and homophony (i.e. whether the same target name was given for two or more stimuli).

The most important result to date is an unhappy one for those of us who would like to compare action- and object-naming using items that are matched for difficulty on all relevant parameters: IT CANNOT BE DONE. Action naming is harder than object naming no matter what we do, and a match on one dimension invariably leads to a serious mismatch on another. Overall, action naming elicits significantly lower agreement, more alternative names, and slower RTs for the dominant/target name. Action vs. object names are also significantly different on virtually all of the independent or predictor variables -- although this difference does not always favor nouns. Not surprisingly, action pictures are significantly more complex (on average) than object pictures, and action names tend to be acquired significantly later than object names. However, action names are also significantly shorter, less complex and more frequent than object names, factors that should (in principle) make them easier to access. Correlational and regression analyses show that action and object naming are also influenced by somewhat different variables -- sometimes in opposite directions. For example, when all other predictors are controlled, frequency is associated with faster reaction times for object naming but slower reaction times for action naming. Some potential explanations for these paradoxical results will be offered, revolving around the strategies that subjects use to deal with the special problem of drawing inferences about action from a static picture. Although these results may seem very technical (and far removed from the interests of linguists and psycholinguists), they have implications for many different research areas (e.g. the four cited above) and for competing theories of the mental/neural representations that underlie nouns and verbs.


Central Bottleneck Influences on the Processing Stages of Word Production

Vic Ferreira

+ more

When producing a word, a speaker proceeds through the stages of lemma selection, phonological word-form selection, and phoneme selection. We assessed whether processing at each of these levels delays processing in a concurrently performed task. Subjects named line-drawn pictures as they performed a three-tone auditory discrimination task. In Experiment 1, subjects named pictures after cloze sentences; lemma selection was manipulated with high- and low-constraint cloze sentences, and phonological word-form selection with pictures that had high- and low-frequency names. In Experiment 2, subjects named pictures while ignoring visually presented distractor words; lemma selection was manipulated with conceptually related distractors and phoneme selection with phonologically related distractors. The lemma selection manipulations in both experiments affected tone discrimination response times as much as picture naming response times, as did the phonological word-form selection manipulation in Experiment 1. However, the phoneme selection manipulation in Experiment 2 affected only picture naming times. The results suggest that lemma selection and phonological word-form selection give rise to bottleneck effects, delaying processing in concurrently performed tasks, while phoneme selection does not.


Constructing Inferences in Text Comprehension

Murray Singer University of Manitoba

+ more

Text inference processes are explored in the framework of a constructionist theory. Three assumptions of constructionism are that: (a) readers maintain coherence at multiple levels of text representation; (b) readers access possible causes of outcomes described in text; and (c) the reader's goal regulates text processing. Two sets of experiments are described that contrast constructionism with competing theories. Alternate approaches for simulating these effects are outlined.


Lexically specific constructions in the acquisition of inflection in English

Stephen Wilson UCLA, Department of Linguistics

+ more

Children learning English often omit grammatical words and morphemes, but there is still much debate over exactly why and in what contexts they do so. This talk presents the results of a study investigating the acquisition of three elements which instantiate the grammatical category of "inflection" -- copula 'be', auxiliary 'be' and 3sg present agreement -- in longitudinal transcripts from five children. The aim is to determine whether inflection emerges as a unitary category, as predicted by recent generative accounts, or whether it develops in a more piecemeal fashion, consistent with constructivist accounts. It was found that the relative pace of development of the three morphemes studied varies significantly from child to child, suggesting that they do not depend on a unitary underlying category. Furthermore, early on, 'be' is often used primarily with particular closed-class subjects, suggesting that forms such as 'he's' and 'that's' are learned as lexically specific constructions. These findings are argued to support the idea that children learn "inflection" (and by hypothesis, other functional categories) not by filling in pre-specified slots in an innate structure, but by learning some specific constructions involving particular lexical items, before going on to gradually abstract more general construction types.


Developmental changes in sentence processing: electrophysiological responses to semantic and syntactic anomalies in 3 to 4 year old children and adults

Debbie Mills & Melissa A. Schweisguth University of California San Diego

+ more

These studies examine the development of cerebral specializations for semantic and syntactic processing in young children and adults. The ERP technique is especially well suited to studying these issues. In normal adults, semantic and syntactic processes elicit distinct patterns of ERPs that differ in timing, morphology and distribution. These characteristic patterns of ERPs have been taken as evidence that the different linguistic processes are subserved by distinct neural systems. Our approach has been to study developmental changes in the brain's response to single words (6 to 36 months) and to simple sentences (3 to 4 years). These studies address several questions: a) whether different neural systems mediate semantic and syntactic processing from an early age, b) what the developmental trajectories of these systems are and how they change as a function of language development, and c) how lexical development influences and interacts with grammatical development.

Today's talk will focus on semantic and syntactic violations in auditory sentence processing in young children and adults. ERPs were collected as participants heard a total of 160 sentences, half with sentence-medial violations, and half controls: e.g. semantic anomaly: "When Justin is thirsty, he drinks teddy bears or soda." and word order (syntactic) violation: "When Justin is thirsty, he water drinks or soda." Children were administered a series of behavioral language tasks prior to the ERP visit. Participants were also asked to judge a subset of the sentences during ERP testing. We will present data from 16 typically developing children (11 females) mean age of 4 years (3.29-4.83) and 19 adults (all right-handed monolingual English speakers). In adults, semantic anomalies elicited a typical N400 response. In children, semantic anomalies elicited the expected late bilateral posterior negativity but also elicited an earlier bilateral anterior positivity. In adults, violations of word order elicited a posterior positive component, P600. In children, ERPs to order violations elicited an N400 response and an anterior positive response much like the pattern observed to semantic violations. We also explored patterns of activity to different types of semantic and order violations. The results were interpreted as being consistent with the hypothesis that in early language development similar neural systems subserve semantic and syntactic processing and that cerebral specializations for different subsystems develop through experience with language.


The Neural Basis of Predicate-Argument Structure

James R Hurford University of Edinburgh

+ more

The mental representations of pre-linguistic creatures could not have contained individual constants, i.e. terms guaranteed to denote particular individual objects. Hence, representations of the form PREDICATE(x), where `x' is an individual variable, seem appropriate.

Research on vision (and, to a lesser extent, on audition) has discerned two largely independent neural pathways in primates and humans: one locates objects in a body-centered spatial map, while the other attributes properties, such as colour and movement, to objects. In vision these are the dorsal and ventral pathways; in audition, there are similarly separable `where' and `what' pathways. The evidence comes from lesion studies in monkeys, performance testing and imaging studies of normal and pathological subjects, and psychological testing of normal subjects on diagnostic tasks.

The brain computes actions using a very small number of `deictic' or `indexical' variables pointing to particular objects in the immediate scene. Parallels exist between such non-linguistic variables and the deictic devices of languages. Indexicality and reference have linguistic and non-linguistic (e.g. visual) versions, sharing the concept of ATTENTION to an object. The individual variables, x, y, z, of logical formulae can be interpreted as corresponding to these mental variables. In computing action, the deictic variables are linked with relatively permanent `semantic' information about the objects in the scene at hand. Such information corresponds to logical predicates.

PREDICATE(x) is a schematic representation of the brain's integration of two broadly separable processes. One process is the rapid delivery by the senses (visual and/or auditory) of the spatial location of a referent object relative to the body, represented in parietal cortex. The eyes, head, body and hands can be oriented to the referent object, which instantiates a mental variable. The other process is the slower analysis of the delivered referent by the perceptual (visual or auditory) recognition subsystems in terms of its properties.

Mental scene-descriptions are necessary for carrying out the practical tasks of primates, and therefore pre-exist language phylogenetically. The type of scene-descriptions used by non-human primates would be reused for more complex cognitive, and ultimately linguistic, purposes. The provision by the brain's sensory/perceptual systems of a pool of about four variables for ad hoc assignment to objects in the accessible environment, and the separate processes of perceptual categorization of the objects so identified, constitutes a preadaptive platform on which an early system for the linguistic description of scenes developed. This system was based on conjunctions of propositions of the form PREDICATE(x), involving up to about four different variables. An example of such a scene-description might be: APE(x) & STICK(y) & MOUND(z) & HOLE(w) & IN(w,z) & PUT(x,y,w)
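The final scene-description above can be rendered as a simple data structure. The sketch below (an illustrative encoding, not anything proposed in the talk) represents a scene as a conjunction of PREDICATE(variable...) facts over a small pool of ad hoc deictic variables:

```python
# The example scene from the text:
# APE(x) & STICK(y) & MOUND(z) & HOLE(w) & IN(w,z) & PUT(x,y,w)
scene = [
    ("APE", ("x",)),
    ("STICK", ("y",)),
    ("MOUND", ("z",)),
    ("HOLE", ("w",)),
    ("IN", ("w", "z")),
    ("PUT", ("x", "y", "w")),
]

def variables(facts):
    """The distinct deictic variables a scene description uses --
    on the account above, a pool of about four."""
    return sorted({v for _, args in facts for v in args})

def holds(facts, pred):
    """All argument tuples for which a given predicate is asserted."""
    return [args for p, args in facts if p == pred]
```

The separation the talk describes falls out naturally: the variables (x, y, z, w) carry only indexical reference to objects in the scene, while all `semantic' content lives in the one-place and many-place predicates conjoined over them.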


Don Robin San Diego State University

+ more

This talk will review studies on apraxia of speech in our laboratory that focus on nonspeech motor control of the articulators. The work is designed to shed light on the underlying impairment in apraxia of speech and to provide insight into possible treatments for this devastating speech disorder. Three studies will be reviewed that point to a disorder of motor programming as the cause of apraxia of speech. In addition, preliminary work on a principled approach to treating the disorder, stemming from our basic studies, will be presented.


Comparing Reading And Auditory Comprehension In Aphasia

Jelena Jovanovic Department of Cognitive Science, University of California, San Diego

+ more

In this talk, I will review the basic tenets of the classical (Wernicke-Geschwind) model of language processing, and offer examples of recent findings in aphasia research that are *not* accounted for by this model. I will then present my own research findings, which address several unexplored questions about reading and auditory comprehension in aphasia, and I will discuss how they contribute to the expansion and modification of the classical model. The questions I asked are: 1. What is the relationship between reading and auditory comprehension in aphasic patients? 2. Can relative performance in reading and auditory comprehension be related to (a) aphasia type and (b) lesion location?

I evaluated these factors in 78 right-handed, single left-hemisphere stroke patients. Reading and auditory comprehension scores, as well as aphasia type, were assessed by the Western Aphasia Battery. Scores were compared across all patients, then clustered to reveal patients with comprehension advantage in one modality. Brain lesion sites were revealed by MRI. To determine common lesioned areas in patients with a modality-specific comprehension advantage, lesion sites were standardized and overlapped. My results reveal a trend toward poorer reading comprehension across aphasics, with notable exceptions. Broca's aphasics appear to have the worst reading comprehension relative to their auditory comprehension. Wernicke's aphasics show the opposite pattern: in most cases, aphasics of this type have a slight reading comprehension advantage. I conclude that reading and auditory comprehension may be differentially affected in aphasia, and in notable patterns across aphasia types. Lesion analysis revealed a small region of inferior motor cortex spared in patients with better reading comprehension, but lesioned in almost all with auditory comprehension advantage. This result supports the possibility that motor-articulatory processing contributes to reading more than to auditory comprehension.


The development of long-term explicit memory in infancy: Brain and behavioral measures.

Leslie Carver Center on Human Development and Disability at the University of Washington

+ more

The ability to remember information about the past is hypothesized to emerge in the second half of the first year of life in human infants. Although there is substantial information from both cognitive neuroscience and behavioral psychology to support this hypothesis, there is little direct evidence with which the question can be addressed. Using deferred imitation and event-related potentials (ERP), infants' memory abilities were tested between the ages of 9 and 16 months. The results of several studies indicate that there are important developments in the ability to recall information over very long delays at the end of the first year of life. Results from behavioral studies indicate that infants can recall progressively more information about the order of events over progressively longer delay intervals near the end of the first year. Results from ERP studies show that these behavioral changes occur concomitantly with developments on the neurophysiological level. Furthermore, the evidence suggests that it is retrieval of information, rather than encoding, that develops. These results support the idea that the emergence of connections between the medial temporal lobe structures thought to be involved in the encoding and storage of information and the prefrontal areas thought to be important for retrieval of order information over the very long term marks an important event in the emergence of long-term explicit memory ability. These results support the contention that the explicit memory system is emergent near the end of the first year of life.


Putting Language Back in the Body: The Influence of Nonverbal Action on Language Production and Comprehension.

Spencer Kelly

+ more

In my talk, I theorize that the human capacity for language evolved within a rich and structured matrix of bodily action. I hypothesize that if bodily action did indeed play a foundational role in the emergence of language over evolution, those effects may continue to have a powerful impact on how people use language in the present. Specifically, I examine the role that bodily action plays in language processing and development on three levels of analysis: cognitive, neurological, and social. On the cognitive level, I will first talk about how nonverbal actions combine with speech not only to help make communication clearer for listeners, but also to help speakers think. On the neurological level, I will discuss how different actions influence how the brain processes low-level speech information from one moment to the next. Finally, on the social level, I will argue that nonverbal actions play an important role in how people understand others' intentions. Throughout my talk, I approach these issues from two developmental timeframes: moment to moment and ontogenetic.


The Early Word Catches the Weights: Age of acquisition effects in Connectionist networks.

Gary Cottrell Department of Computer Science and Engineering, UCSD

+ more

The strong correlation between the frequency of words and their naming latency has been well documented. However, as early as 1973, the Age of Acquisition (AoA) of a word was alleged to be the actual variable of interest, but these studies seem to have been ignored in most of the literature. Recently, there has been a resurgence of interest in AoA. While some studies have shown that frequency has no effect when AoA is controlled for, more recent studies have found an independent contribution of frequency and AoA. Connectionist models have repeatedly shown strong effects of frequency, but little attention has been paid to whether they can also show AoA effects. Indeed, several researchers have explicitly claimed that they cannot show AoA effects.

In this work, we explore these claims using a simple feed-forward neural network. We find a strong relationship between the epoch in which a pattern is acquired (measured AoA) and final error on a pattern. We find this in a range of mapping tasks, from consistent mappings (identity mapping), similar to orthography to phonology, to arbitrary mappings (random mappings), similar to object naming. In almost all cases, there is also a contribution of frequency. In a simulation of a reading task, we find the standard frequency × consistency interaction, mirrored by an AoA × consistency interaction. We have also begun to investigate the properties that cause some patterns to be acquired earlier or later than others.
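The core measurement described above — recording the epoch at which each pattern is acquired and relating it to that pattern's final error — can be sketched in a few lines. This is a hedged toy illustration, not the authors' actual model: the network sizes, learning rate, acquisition threshold, and the use of localist (one-hot) inputs to stand in for an arbitrary mapping are all invented here, and the frequency manipulation is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "arbitrary mapping" task: 20 localist (one-hot) word forms mapped to
# random 10-bit output patterns, loosely analogous to object naming.
n_pat, n_hid, n_out = 20, 15, 10
X = np.eye(n_pat)
Y = rng.integers(0, 2, size=(n_pat, n_out)).astype(float)

W1 = rng.normal(0.0, 0.5, (n_pat, n_hid))
W2 = rng.normal(0.0, 0.5, (n_hid, n_out))
lr = 1.0
threshold = 0.05                 # per-pattern MSE criterion for "acquired"
acquired = np.full(n_pat, -1)    # epoch at which each pattern first meets criterion

for epoch in range(5000):
    H = sigmoid(X @ W1)          # forward pass
    O = sigmoid(H @ W2)
    err = O - Y
    dO = err * O * (1.0 - O)     # backprop for squared-error loss
    dH = (dO @ W2.T) * H * (1.0 - H)
    W2 -= lr * H.T @ dO / n_pat
    W1 -= lr * X.T @ dH / n_pat
    per_pat = (err ** 2).mean(axis=1)
    newly = (per_pat < threshold) & (acquired < 0)
    acquired[newly] = epoch      # record measured age of acquisition

final_err = ((sigmoid(sigmoid(X @ W1) @ W2) - Y) ** 2).mean(axis=1)
learned = acquired >= 0
if learned.sum() > 2:
    r = np.corrcoef(acquired[learned], final_err[learned])[0, 1]
    print(f"measured AoA vs. final error: r = {r:.2f}")
```

Tracking per-pattern error across epochs is what turns an ordinary training run into an AoA measurement: the acquisition epoch is a property the network reveals, not one that is built in.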

This is joint work with Mark Smith and Karen Anderson.


ERP Study on the Processing of Filler-Gap Dependencies in Japanese Scrambling

Mieko Ueno Department of Linguistics, UCSD

+ more

This experiment investigated the processing of so-called "scrambled" sentences in Japanese. Scrambling of sentence constituents is a common phenomenon in many of the world's languages, particularly those with rich case-marking systems (e.g. Latin). Japanese has canonical subject-object-verb (SOV) word order; in this event-related brain potential (ERP) study, direct objects were displaced from their canonical position preceding the verb to a position further to the left preceding the subject, resulting in OSV word order. Some of these direct objects were demonstrative pronouns (e.g. `this', `that'), while others were interrogative pronouns, so-called "wh"-words (e.g. `who' and `what').

The first question that motivated this study was whether such "scrambling" of sentence constituents would have the same kinds of processing effects as the formation of wh-questions and relative clauses in SVO languages like English. Wh-questions and relative clauses are analyzed in similar ways in linguistic theory: question words and relative pronouns must both occur clause-initially, and they share other syntactic properties as well. Previous ERP studies have shown that holding a displaced constituent like a question word or relative pronoun (filler) in working memory until it is assigned to its canonical position (gap) elicits slow anterior negative potentials across the sentence, and that assigning the filler to the gap elicits left anterior negativity between 300 and 500 msec to the word following the gap (Kluender and Kutas 1993; King and Kutas 1995, among others). I tested whether such filler-gap ERP effects would be elicited by scrambled sentences in Japanese as well. Unlike English wh-words, Japanese wh-words usually remain in their canonical position just like non-wh constituents (although wh-words can also be scrambled just like any other constituent); this is referred to in linguistics as "wh-in-situ", and is a common pattern for asking questions across the world's languages. The second question was then whether there would be any evidence of processing specific to this pattern of wh-in-situ in Japanese.

Stimulus sentences were mono-clausal questions with wh- and demonstrative pronouns either "scrambled" (preceding the subject) or "in-situ" (following the subject and preceding the verb), as shown in English gloss below (ACC=accusative case, NOM=nominative case).

The local newspaper-to according [what-ACC/that-ACC] the reckless adventurer-NOM finally [what-ACC/that-ACC] discovered-Q

`According to the local newspaper, did the reckless adventurer finally discover what/that?'

Filler sentences manipulated the sentence position of scrambled elements, case-marking, and number of clauses to prevent strategic processing of stimulus sentences.

The results basically replicated the ERP effects in response to constituent displacement in wh-questions and relative clauses in English: slow anterior negative potentials between scrambled constituents and their gaps, and left anterior negativity between 300 and 600 msec at post-gap positions. In addition, both scrambled and in-situ wh-sentences elicited phasic right anterior negativity between 300 and 600 msec to the verb+question particle (Q) position at sentence end. This suggests increased processing load for both scrambled and in-situ wh-sentences compared to their non-wh counterparts. This may be because Japanese wh-words require a question particle (Q) attached to the final verb, and this requirement may create another type of dependency between the wh-word and the final question particle.

I conclude by discussing how these results might map onto current models of Japanese sentence processing.


What's Wrong With The Autistic Brain And Why Can't Developmental Plasticity Take Care Of It?

Axel Mueller University of California San Diego

+ more

Behavioral and, more recently, neuroimaging studies have demonstrated the remarkable potential of the developing brain to reorganize following insult. There is now general consensus that the developmental disorder of autism requires explanation on the neurobiological level (rather than, as previously thought, in experiential terms). Even though etiological mechanisms and neural loci of abnormality in autism are not fully established, it is clear that these abnormalities have an early (intrauterine or postnatal) onset. This raises the question why compensatory mechanisms at work following gross structural lesion are less effective (or even absent) in developmental disorders such as autism, which almost always results in lifelong cognitive impairment. I will present some recent neuroimaging studies suggesting abnormal neurofunctional maps in autism. Conventional procedures of groupwise analyses in "normalized" space may partially mask the biological bases of these findings. Very few studies have examined activation patterns in autism on the single-case level. First findings suggest that individual variation of neurofunctional organization may be abnormally pronounced, potentially reflecting diversity of etiological pathways. Activation maps in the autistic brain have been found to be unusually scattered. This may relate to suspected disturbance of neural growth regulation observed in structural studies. Lack of compensatory reorganization can be attributed to the diffuse nature of these disturbances.


"Metaphor and the Space Structuring Model"

Seana Coulson University of California, San Diego

+ more

In this talk we outline the meaning construction operations involved in metaphor comprehension, and assess the claim that the right hemisphere (RH) is specialized for this sort of nonliteral processing. The focus is on the contrasting predictions about on-line comprehension of metaphoric language made by two models of high-level language processing. One model is the standard pragmatic model (Grice, 1975), which posits distinct mechanisms for literal and nonliteral language processing. The other model is the space structuring model, which is based on the theory of conceptual integration, also known as blending (Coulson, in press; Fauconnier & Turner, 1998). In the space structuring model, literal and nonliteral comprehension both proceed via the construction of simple cognitive models and the establishment of various sorts of mappings, or systematic correspondences between elements and relations in each.

Experiments addressed three issues: (i) whether there is a qualitative difference in the processing of metaphors and more literal language; (ii) whether the continuum of metaphoricity described above predicted on-line comprehension difficulty; and (iii) whether the right hemisphere is specialized for metaphor processing. Results suggest that though the comprehension of metaphors is more effortful than the comprehension of literal language, the same neural resources are recruited for the construction of both sorts of meanings. Further, evidence from event-related brain potentials supports a role for the right hemisphere in metaphor comprehension, but argues against the suggestion that right hemisphere semantic representations are somehow specialized for metaphor comprehension.


"Inflectional Morphology and the Activation of Thematic Role Concepts"

Todd R. Ferretti University of California, San Diego

+ more

According to most linguistic and psycholinguistic theories, the assignment of a verb's thematic roles to nouns in sentences is crucial for sentence comprehension. However, despite this consensus there has been relatively little research investigating how detailed the conceptual information is that becomes available when verbs are read or heard. The present research addresses this issue in two ways. First, in a series of single-word priming experiments I demonstrate that verbs immediately activate knowledge of typical agents (arresting-cop), patients (arresting-criminal), and instruments (stirred-spoon). The second part of this research extended these results by investigating how people combine morpho-syntactic information (e.g., aspect) with world knowledge of events when they read verbs and noun phrases in isolation. In one experiment, subjects read verb phrases, presented for a brief duration, that were marked with either imperfective (was verbing) or perfect (had verbed) aspect. They then named visually presented targets that were typical locations (was skating - arena). Typical locations of events were more highly activated when the verbs referenced the situations as ongoing (imperfective) versus completed (perfect). The final experiment examined how people integrate world knowledge of agents and patients in specific events with the aspectual properties of present and past participles to constrain interpretation of isolated phrases such as arresting cop and arrested crook. An implemented competition model was used to generate predictions about how people interpret these types of phrases. The model correctly predicted that subjects combined typical patients more easily with past participles (arrested crook) than with present participles (arresting crook). Interestingly, they often interpreted phrases like arresting crook as verb phrases when the head noun was a great patient / terrible agent. Furthermore, subjects combined typical agents with present participles (arresting cop) more easily than with past participles (arrested cop). Thus the activation of world knowledge of event participants is modulated by grammatical morphemes, and people equally weight these sources of information when combining them to constrain thematic role assignment during on-line interpretation of phrases.


"A Connectionist Model of Spatial Knowledge Acquisition"

Paul Munro University of Pittsburgh

+ more

Representations of spatial location as measured by priming studies have shown dependencies on both spatial proximity in the environment and temporal contiguity during acquisition. We have simulated these results using a feed-forward network that is trained to make temporal associations over an external pattern space that has intrinsic spatial structure. The hidden unit representations develop similarity properties that capture properties from both the time and space domains. The relative influence of temporal and spatial structure on the internal representations is seen to change over the course of learning. This leads to the prediction that spatial similarity should show an initial dominance that is eventually superseded by similarity in the temporal domain.
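The training regime described above can be sketched as follows. This is a hedged illustration under invented assumptions, not the authors' simulation: the ring of eight locations, the Gaussian spatial codes, the scrambled tour supplying the temporal structure, and all network sizes are made up for this example.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical setup: 8 locations on a ring. Each location's input code is a
# Gaussian bump over 8 units, so spatially near locations have overlapping codes.
n_loc, n_hid = 8, 5
pos = np.arange(n_loc)
d = np.abs(pos[:, None] - pos[None, :])
dist = np.minimum(d, n_loc - d)               # circular (ring) distance
X = np.exp(-(dist.astype(float) ** 2) / 2.0)  # row i = input pattern for location i

# Temporal structure: a fixed tour through the locations in scrambled order;
# the network is trained to associate each location with its successor.
tour = rng.permutation(n_loc)
succ = np.empty(n_loc, dtype=int)
succ[tour] = np.roll(tour, -1)
T = X[succ]                                   # target = code of the successor location

W1 = rng.normal(0.0, 0.3, (n_loc, n_hid))
W2 = rng.normal(0.0, 0.3, (n_hid, n_loc))
lr = 0.5

def loss():
    return ((sigmoid(sigmoid(X @ W1) @ W2) - T) ** 2).mean()

loss0 = loss()
for epoch in range(3000):
    H = sigmoid(X @ W1)
    O = sigmoid(H @ W2)
    err = O - T
    dO = err * O * (1.0 - O)
    dH = (dO @ W2.T) * H * (1.0 - H)
    W2 -= lr * H.T @ dO / n_loc
    W1 -= lr * X.T @ dH / n_loc
loss1 = loss()

# Hidden-unit similarity between locations; snapshots of this matrix over
# training can be compared against spatial distance on the ring and against
# temporal adjacency in the tour.
sim = np.corrcoef(sigmoid(X @ W1))
print(f"loss {loss0:.3f} -> {loss1:.3f}")
```

The key design point is that space enters only through overlap in the input codes while time enters only through the association targets, so the hidden similarity matrix can be probed for the relative influence of each over the course of learning.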


"Plausibility and Grammatical Agreement"

Robert Thornton

+ more

One of the central divisions in research on language processing is between theories of comprehension and production. These fields have developed largely independently, with little theoretical overlap even when dealing with the same phenomena. Four experiments were conducted to examine production/comprehension overlap by investigating the role of a probabilistic semantic factor, the plausibility of subject-verb relationships, on subject-verb agreement in English. In the production task, a verb was presented visually, followed by the auditory presentation of a sentence preamble. Participants were asked to create a complete passive sentence beginning with the preamble followed by the verb and whatever ending came to mind. The preamble contained two nouns (e.g., "the report about the senators"). The plausibility of the verb was manipulated so that either (a) both nouns could be plausible subjects (e.g., "was seen", as both reports and senators can plausibly be seen) or (b) only the subject noun could be a plausible subject (e.g., "was photocopied", as only reports can plausibly be photocopied). The comprehension task was a self-paced reading task using the same materials. The results from both methodologies demonstrated robust effects of plausibility. For production, participants made significantly more agreement errors when both nouns were plausible than when only the subject was plausible. For comprehension, participants spent significantly more time reading the verb when both nouns were plausible than when only the subject was plausible. These results will be discussed in terms of the overlap between methodologies, as well as their implications for current production models. A distributional account will be proposed that is motivated by current models of comprehension and is consistent with other recent production data.


"Dorsal And Ventral Pathways In Speech And Language Processing"

Gregory Hickok

+ more

The functional neuroanatomy of speech perception has been difficult to characterize. Part of the difficulty, we suggest, stems from the fact that the neural systems supporting "speech perception" vary as a function of task. Specifically, the set of cognitive and neural systems involved in performing traditional laboratory speech perception tasks, such as discrimination or identification, are not necessarily the same as those involved in speech perception as it occurs during natural language comprehension. Based on a review of data from a range of methodological approaches, and two new experiments, we propose that auditory cortical fields in the posterior half of the superior temporal lobe, bilaterally, constitute the primary substrate for constructing sound-based representations of speech, and that these sound-based representations interface with different supramodal systems in a task-dependent manner. Tasks which require access to the mental lexicon (i.e., accessing meaning-based representations) rely on a ventral pathway in which auditory-speech representations are mapped onto meaning; tasks which require explicit access to speech segments rely on a dorsal pathway which interfaces auditory- and articulatory-based representations of speech. We propose that the dorsal, auditory-motor interface system is critical for speech development and also subserves phonological working memory in the adult. We'll also discuss how this model can account for clinical aphasic syndromes.


"Do Children Have Specialized Word Learning Abilities?"

Gedeon Deák

+ more

Evidence that young children learn words at a prodigious rate has led developmental researchers to postulate domain-specific word learning processes. I will give a broad (but informal) overview of these proposals. I will then review evidence for and against the uniqueness of word learning qua induction. The evidence (much of it very recent) implies that general inductive processes can account for the most widely cited findings. Other evidence shows that preschoolers are not precocious in all regards, and *most* of their word learning difficulties are predictable from general conceptual and inductive factors. Preschoolers are, however, sensitive to the unique semantic and distributional properties of natural lexicons, raising interesting (if unresolvable) evolutionary questions.


Grammatical Gender Modulates Semantic Integration Of A Picture In A Spanish Sentence

Nicole Wicha

+ more

While grammatical gender is widespread across the world's languages, its role in processing is poorly understood. Wicha, Bates, Orozco-Figueroa, Reyes, Hernandez and Gavaldón (in preparation) found that gender interacts with semantic information during on-line sentence processing, to facilitate or inhibit picture-naming times in Spanish. The current study uses event-related potentials (ERPs) to further examine the nature and time course of the effect of gender in sentence processing. Native Spanish speakers listened for comprehension to Spanish sentences, wherein one of the nouns was replaced by a line drawing. The object depicted by the drawing was either semantically congruent or incongruent within the sentence context. Additionally, the object's name either agreed or disagreed in gender with that of the preceding determiner (e.g., el, la). Semantically incongruent drawings elicited a classic N400, regardless of gender agreement. ERP amplitude in the N400 region, however, was sensitive to the gender of the determiner, being smaller for mismatches than matches, especially over (pre)frontal sites. There was also an effect of gender expectation on the ERP to the article, with unexpected determiners eliciting a larger (pre)frontal negativity than expected determiners. In sum, gender and semantic information both influenced a picture's integration with a sentence's meaning, primarily over frontal regions, albeit in different ways. Listeners thus do use gender information even from articles to comprehend sentences.

Presented at the Annual Cognitive Neuroscience Society Meeting, San Francisco, CA, on April 9-11, 2000.


Reasons, Persons and Cyborgs

Andy Clark
(guest lecture; in CSB 003)

+ more

The scientific image of the nature of human reason is in a state of flux. Insights from Cognitive Psychology, Artificial Neural Networks, Neuroscience, Cognitive Anthropology and Robotics are converging on a model of human reason in which reliable environmental context, inorganic props and tools, emotional responses and (other) so-called 'fast and frugal' heuristics all play pivotal roles in the mediation of effective adaptive response. Moving in the space of reasons, it increasingly seems, is as much about moving in the space of objects as in the space of ideas. Embodied action is part and parcel of the mechanism of reason itself. The cognitive architecture that makes us what we are involves heterogeneous, shifting webs of structure and process which criss-cross the (cognitively marginal) boundaries of the squishy biological organism.


"In search of ... the lexicon"

Seana Coulson & Kara Federmeier

+ more

We review results from a series of studies that examine electrophysiological measures of lexical processing in various sorts of linguistic contexts. These findings suggest serious inadequacies in psycholinguists' conception of the lexicon.


"Cerebral Organization For Word Processing In Bilingual Toddlers"

Barbara Conboy

+ more

Throughout the history of research in bilingualism, a prevailing theme has been the question of whether two languages within the same individual are mediated by the same or different neural systems. Within-subject differences in organization of the neural systems mediating each language have been thought to be influenced by experience with each language (i.e., relative language proficiency) and/or the age of acquisition of the second language (L2). Recent fMRI, PET and ERP studies with highly-proficient bilingual adults have indicated that the organization of neural systems involved in the lexical-semantic processing of each language is linked to subjects' language proficiency and frequency of use of each language rather than the age at which the L2 was acquired. The present study explored the effects of language experience on how children raised in bilingual environments process words in each of their languages. Event-related potentials (ERPs) to known and unknown words in each language (English and Spanish) were recorded in a group of 20-22 month-old children who had regular exposure to both languages. Within-language comparisons examined the neural activity elicited by each word type over eight electrode sites in each language. Between-language comparisons examined ERP differences to known-unknown words in the dominant and non-dominant languages. Results indicated ERP patterns that were linked to language experience. Early, focally-distributed differences to known-unknown words were found for the dominant but not the non-dominant language. Later differences were found in both languages; however, they were more focally distributed for the dominant than for the non-dominant language. These findings underscore the role of language experience in establishing specialization for language processing.


Instantiating Hierarchical Semantic Relationships in a Connectionist Model of Semantic Memory

George S. Cree & Ken McRae University of Western Ontario

+ more

Past models of semantic memory have transparently represented hierarchical relationships as distinct levels of nodes connected by "isa" links. We present a connectionist model in which basic-level (e.g., dog) and superordinate-level (e.g., animal) concepts are represented over the same set of semantic features. Semantic feature production norms were used to derive basic-level representations and category memberships for 181 concepts. The model was trained to compute distributed patterns of semantic features from word forms. Whereas a basic-level word form mapped to a semantic representation in a one-to-one fashion, a superordinate word form was trained by pairing it with each of its exemplars' semantic representations with equal frequency (typicality was not built in). This training scheme mimics the fact that people sometimes refer to an exemplar with its basic-level label, and sometimes with its superordinate label. The model is used to simulate human data from typicality, category verification, and superordinate-exemplar priming experiments.
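The superordinate training scheme described above can be sketched in miniature. This is a hedged illustration, not the authors' model: the five-concept feature "norms", the eight features, and the use of a purely linear network (which makes the superordinate converge exactly to the centroid of its exemplars) are all simplifying assumptions invented here.

```python
import numpy as np

# Hypothetical toy feature norms: five basic-level concepts over 8 semantic features.
features = {
    "dog":   [1, 1, 0, 1, 0, 0, 0, 0],
    "cat":   [1, 1, 0, 0, 1, 0, 0, 0],
    "robin": [1, 0, 1, 0, 0, 1, 0, 0],
    "chair": [0, 0, 0, 0, 0, 0, 1, 1],
    "table": [0, 0, 0, 0, 1, 0, 1, 1],
}
animals = ["dog", "cat", "robin"]

words = list(features) + ["animal"]
idx = {w: i for i, w in enumerate(words)}
n_feat = 8

# Training pairs: each basic-level word form maps to its own features; the
# superordinate "animal" is paired with each exemplar's features equally often.
pairs = [(idx[w], np.array(f, dtype=float)) for w, f in features.items()]
pairs += [(idx["animal"], np.array(features[w], dtype=float)) for w in animals]

# Linear network, one-hot word-form input -> distributed feature output.
W = np.zeros((len(words), n_feat))
lr = 0.5
for epoch in range(100):
    grad = np.zeros_like(W)
    counts = np.zeros(len(words))
    for i, t in pairs:
        grad[i] += t - W[i]       # squared-error gradient for this pair
        counts[i] += 1
    W += lr * grad / counts[:, None]

# The superordinate's learned representation is the centroid of its exemplars,
# so graded typicality falls out of the training regime rather than being built in.
typicality = {w: float(W[idx["animal"]] @ np.array(features[w])) for w in animals}
print(sorted(typicality, key=typicality.get, reverse=True))
```

In this toy version the atypical exemplar (robin, the only bird) ends up least similar to the learned "animal" pattern, illustrating how typicality effects can emerge without "isa" links or explicit typicality coding.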

