CRL Talks

Past Talks


2024-03-01

What Can Language Models Tell Us About the N400?

James Michaelov

Department of Cognitive Science at University of California, San Diego


While the idea that language comprehension involves prediction has been around since at least the 1960s, advances in natural language processing technology have made it more viable than ever to model this computationally. As language models have increased in size and power, performing better at an ever-wider array of natural language tasks, their predictions also increasingly appear to correlate with the N400, a neural signal that has been argued to index the extent to which a word is expected based on its preceding context. The predictions of contemporary large language models can not only be used to model the effects of certain types of stimuli on the amplitude of the N400 response, but in some cases appear to predict single-trial N400 amplitude better than traditional metrics such as cloze probability. With these results in mind, I will discuss how language models can be used to study human language processing, both as a deflationary tool, and as a way to support positive claims about the extent to which humans may use the statistics of language as the basis of prediction in language comprehension.
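As an illustration of the kind of predictability estimate discussed here, the sketch below computes per-word surprisal from a pretrained causal language model using the Hugging Face transformers library. GPT-2 and the example sentence (modeled on a classic N400 stimulus) are illustrative placeholders, not the models or materials from the talk.

import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model and sentence, purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

sentence = "He spread the warm bread with socks"
ids = tokenizer(sentence, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits          # shape: (1, sequence_length, vocab_size)
log_probs = torch.log_softmax(logits, dim=-1)

# Surprisal of token t = -log2 P(token_t | preceding tokens).
for t in range(1, ids.size(1)):
    token = tokenizer.decode([ids[0, t].item()])
    surprisal_bits = -log_probs[0, t - 1, ids[0, t]].item() / math.log(2)
    print(f"{token!r}: {surprisal_bits:.2f} bits")

Higher surprisal for a sentence-final word like "socks" is the kind of model-derived quantity that can be compared against single-trial N400 amplitudes.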

2024-02-16

Grad Data Blast

Matthew McArthur -- Pandemic Disruptions and Socioeconomic Status: Examining Their Effects on Early Vocabulary Development

Jessie Quinn -- Syntax Guides Bilingual Language Selection

Thomas Morton -- Dynamics of Sentence Planning: Repetition from Representation to Articulation


2023-10-31

Listening effort in processing a non-dominant language: a dual task study of Heritage Spanish speakers

Zuzanna Fuchs

Assistant Professor, University of Southern California, Department of Linguistics


Studies measuring listening effort in second-language learners (L2s) have found that these speakers exert more effort to process speech in their second language than do monolingual speakers of the language. The present study, conducted in collaboration with Christine Shea and John Muegge (Univ. of Iowa), investigates listening effort in adult Heritage Speakers (HSs) of Spanish in comparison with both Spanish-dominant control speakers and L2s of Spanish. Through this comparison, we ask whether previously observed increased listening effort in an L2 may be at least partly a result of the nature of the second language acquisition experience. Like L2s, HSs are non-dominant in the target language, but unlike L2s, their input to the non-dominant language was early and naturalistic. If the nature of the acquisition process of the non-dominant language determines listening effort, then HSs may pattern differently from L2s and more like the Spanish-dominant control group. To measure listening effort in these populations, we employ a dual task study that combines a non-linguistic motion-object tracking task with a picture-selection task targeting participants’ comprehension of subject and object relative clauses. Assuming that both tasks draw from the same finite pool of cognitive resources – in line with previous dual task studies –, listening effort is operationalized as participants’ (decreased) accuracy and/or (increased) response time on the non-linguistic task as the complexity of the linguistic stimulus increases. Of interest is whether this effect is similar across the three groups or whether they show differential increases in listening effort. With data collection underway, we offer tentative results.

2023-10-24

Benefits and Applications of Web-Based Early Language Assessment

Matthew McArthur

San Diego State University


In the rapidly evolving landscape of developmental research, online assessments have emerged as a promising alternative to traditional in-person evaluations. This presentation will discuss the following advantages of using this method. Firstly, online assessments guarantee standardization, ensuring uniform instructions and stimuli for all participants, thereby reducing potential examiner-introduced variability. They offer flexibility in scheduling, enabling participants to take the assessment at optimal times and settings, boosting participation rates and minimizing scheduling conflicts. They allow for real-time data collection, streamlining the research process by capturing, storing, and processing participant responses. With the right infrastructure, these platforms ensure robust data security through encryption and secure storage mechanisms, reducing potential breaches compared to traditional methods. They help facilitate longitudinal studies by simplifying re-assessment protocols and eliminating the need for repetitive in-person engagements. Online assessment is cost-effective as it allows researchers to reduce their purchases of physical resources. Lastly, the accessibility and reach of online assessments open doors to diverse geographical and demographic pools, enriching sample diversity and size. This presentation will elaborate on these benefits, exploring the potential of online assessments in developmental research.
Following a discussion of the broad advantages of online assessments, the Web-Based Computerized Comprehension Task (Web-CCT) will be introduced. This online language assessment is designed to measure children’s receptive vocabulary from the ages of 18 to 60 months. Unlike many existing measures, the Web-CCT is designed to be administered on a desktop, laptop, tablet, or smartphone without the need for a trained researcher. Beyond this, it can measure vocabulary earlier than other assessments, starting at 18 months. Preliminary psychometric analysis underscores its reliability and potential as an important tool for developmental research. Attendees will gain insights into its structure, functionalities, and how it is being used in research conducted at the SDSU Infant and Child Development Lab.

2023-05-23

Duolingo: Building Fun and Effective Learning Environments

Bozena Pajak

Duolingo


Duolingo is the leading mobile learning platform globally, offering courses in languages, math, and early literacy (and more in development!). In this talk, I will give an overview of the Duolingo app and the company's experimentation-focused approach to app development. I will also share specific examples of our in-app experiments and other research studies that have contributed to making Duolingo a unique learning product: one that engages and motivates learners while also teaching them effectively. 

Presenter Bio
Bozena Pajak is the Vice President of Learning and Curriculum at Duolingo. She has a Ph.D. in Linguistics from the University of California, San Diego, and received postdoctoral training in Brain and Cognitive Sciences at the University of Rochester. Prior to joining Duolingo, Bozena was a Researcher and Lecturer in the Linguistics Department at Northwestern University. Her research investigated implicit learning and generalization of linguistic categories. At Duolingo, she has built a 40-person team of experts in learning and teaching, and is in charge of projects at the intersection of learning science, pedagogy, and product development.

2023-05-16

A history of our times

Tyler Marghetis

Assistant Professor of Cognitive and Information Sciences, University of California, Merced


This is a talk about Time. I start with the tension between, on the one hand, the global diversity in how people talk and think about time, and on the other, the sense of stability—even necessity—that we often assign to our own idiosyncratic conceptions. I then argue as follows. First, ways of talking and thinking about time are best analyzed, not as concepts within individual brains, but as heterogeneous systems distributed across brains, bodies, material artifacts, and cultural practices—that is, as “cognitive ecologies.” Second, within a cognitive ecology, mutual dependence is the rule rather than the exception. Third, since cognitive ecologies consist of such varied components as neural circuits and Twitter timelines, these ecologies exhibit change on multiple, nested timescales—timescales that range from the slow evolution by natural selection of innate biases in our brains and bodies, to the cultural evolution of language and other artifacts, to the rapid pace of situated communication and interaction. Fourth, these considerations explain the patterns in cross-cultural diversity, the stability of conceptions within communities, and the ways in which conceptions do, and do not, change over time. This argument is intended to be generic and to apply equally to our conceptions of other domains. I conclude that our conceptions of time—and number, and space—only make sense in light of their histories.

2023-04-25

Language generality in phonological encoding: Moving beyond Indo-European languages

John Alderete

Simon Fraser University


Theories of phonological encoding are centred on the selection and activation of phonological segments, and how these segments are organised in word and syllable structures in online processes of speech planning. The focus on segments, however, is due to an over-weighting of evidence from Indo-European languages, because languages outside this family exhibit strikingly different behaviour. We examine speech error, priming, and form encoding studies in Mandarin, Cantonese, and Japanese, and argue that these languages deepen our understanding of phonological encoding. These languages demonstrate the need for language particular differences in the first selectable (proximate) units of phonological encoding and the phonological units processed as word beginnings. Building on these results, an analysis of tone slips in Cantonese suggests that tone is processed concurrently with segments and sequentially assigned after segment encoding to fully encoded syllables.

2023-04-18

The Neurophysiological Basis of Expectation-based Comprehension

Matthew W. Crocker

Dept of Language Science & Technology, Saarland University, Germany


I will outline recent results from my lab supporting a single-stream model of expectation-based comprehension, under which event-related brain potentials directly index two core mechanisms of sentence comprehension. The N400 – modulated by both expectation and association, but not plausibility – indexes retrieval of each word from semantic memory (N400). The P600, by contrast, provides a continuous index of semantic integration difficulty as predicted by a comprehension-centric model of surprisal, characterizing the effort induced in recovering the unfolding sentence meaning (P600). The findings are also argued to present a strong challenge to multi-stream accounts.

2023-03-14

What L2 speakers can tell us about filler-gap dependencies

Grant Goodall

Department of Linguistics, UC San Diego


Filler-gap dependencies have long attracted attention because on the one hand, they are able to occur over long distances, while on the other hand, they are disallowed in certain specific environments, often for reasons that are still mysterious. Here I present a series of acceptability experiments on L2 speakers (done jointly with Boyoung Kim) that show that these speakers handle filler-gap dependencies in a way similar to L1 speakers, but not exactly the same, and that these L1-L2 differences can shed light on the nature of filler-gap dependencies.  

The experiments concern three instances of filler-gap dependencies where L2 speakers turn out to behave in surprising ways: 
  1) gap in a that-clause (“Who do you think [that Mary saw __ ]?”) 
  2) gap inside an “island” structure (*”Who do you wonder [why Mary saw __ ]?”) 
  3) gap as the subject of a that-clause (*”Who do you think [that __ saw Mary]?”) 
The overall pattern of results in L1 and L2 suggests that the ability to put a gap in a that-clause needs to be learned (and does not follow automatically from the ability to construct a filler-gap dependency and to embed a that-clause) and that the problem with having a subject gap in a that-clause may be related to the processing difficulty associated with having a gap at the onset of a clause. At a larger level, the results show that serious exploration of L2 speakers can be an important new source of information regarding longstanding linguistic puzzles.

2023-03-07

Bridging languages: Reconstructing “The Construction of Reality”

Michael A. Arbib

Adjunct Professor of Psychology, UCSD
Emeritus University Professor, USC


This talk presents the key arguments from a paper for the Special Issue on “The interdisciplinary language of science, philosophy and religious studies” of Rivista Italiana di Filosofia del Linguaggio, Vol. 17, N. 1/2023, edited by Giuseppe Tanzella-Nitti and Ivan Colagè. Feedback and criticism from people of diverse disciplines (or requests for copies of the submitted paper) will be gratefully received at arbib@usc.edu.

The search for a single interdisciplinary language for science, philosophy and religious studies is doomed to failure, and translation between a pair of languages for domains within these fields is often impossible. Crucially, interaction between domains is based on human interaction, whether directly or through documents or artefacts, and so we espouse the development of conversations between “speakers” of different domain languages. Thus we must understand the relation between the mind of the individual scholar and the emerging consensuses that define a domain. We start with the conversations that Mary Hesse and Michael Arbib developed in constructing their Gifford Lectures, The Construction of Reality. They extended a theory of “schemas in the head” to include “the social schemas of a community.” A key question for such conversations is this: “If a language is defined in part by its semantics, how can people using that language disagree?” Members of a specific community -- such as a group of scholars within the same domain -- can possibly reach near agreement on the usage of terminology. Conversations between scholars in two domains thus require that they develop or share a bridging language at the interface of their domains in which they may reach shared understandings of the terms each uses and thus reach shared conclusions or agree to disagree. This general account is then explored in relation to a case study: the bringing together of linguistics, psychology, and neuroscience in the cognitive neuroscience of linguistics.

2023-02-21

Relational climate conversations as catalysts of action: Strengths, limitations, and strategies

Julia Coombs Fine

College of St. Benedict & St. John's University


Though 58% of Americans are concerned or alarmed about the climate crisis (Leiserowitz et al. 2021), most people rarely discuss it with friends and family (Leiserowitz et al. 2019). One strategy for disrupting this socially constructed “climate of silence” (Geiger & Swim 2016) is to engage in relational climate conversations, i.e., conversations about climate issues that draw on and deepen existing interpersonal relationships. Relational climate conversations have shown promise as a means of shifting participants’ attitudes and prompting further reflection and discussion (Beery et al. 2019; Lawson et al. 2019; Goldberg et al. 2019; Galway et al. 2021), and are recommended by many organizations; however, few studies have yet examined the goals, audiences, content, interactional contexts, and specific outcomes of climate conversations.

In the first part of the talk, I discuss survey and interview data from 112 climate activists across the U.S., finding that activists mostly have climate conversations with like-minded, politically progressive friends and family members. While at first glance this finding might suggest an echo chamber effect that could exacerbate the political polarization of climate change, activists mention strategic reasons for choosing like-minded audiences and report that these conversations are moderately effective in moving audiences from passive concern to action (in the form of behavioral changes, political advocacy, and social movement participation). The second part of the talk discusses an in-progress study that more directly observes the content and outcomes of a series of dyadic climate conversations between activists and their non-activist friends, partners, and family members. Preliminary results suggest that the conversations are highly effective at influencing the non-activists to seek out more information and have further conversations with others, but are less effective at influencing them to take action. This result may be due to non-activists’ low perceived efficacy, emphasis on lifestyle changes versus political action, lack of time and resources, and lack of knowledge about what types of action are possible. Activists’ discursive strategies—mostly deferential, occasionally adversarial—provide clues as to how these barriers to action might be overcome in future refinements of climate conversation prompts.

2023-02-14

Language Structures as Explanations of Experience

David Barner

Department of Psychology, UCSD


My research program investigates the nature and origin of human thought as it is expressed through language and other symbolic systems. To do this, my lab focuses on how humans use language to (1) represent abstract conceptual content, (2) engage in logical reasoning, and (3) solve problems of social coordination. In this talk, I focus on the first problem and discuss how humans encode concepts of color, number, and time in language. Against the view that word learning reduces to a kind of mapping problem (e.g., between words and perception), I will argue that a key function of language acquisition is to build new structures that serve to explain experience, rather than just reflect it. In particular, I will argue that our representations of concepts like time and number - and to a degree color - involve rich inferential structures that extend beyond the data that perception is capable of providing. For example, relations between time words like second, minute, and hour, drive children's learning about perceptual duration, rather than the opposite, and constitute the backbone for building "theories" of time. Similarly, relations between number words like "twenty-seven" and "thirty-seven" drive conceptual discovery about the nature of numbers, and provide the basis for children's early emerging belief that numbers are infinite. Drawing on cross-cultural studies and data from the historical record, I relate this idea to the cultural evolution of symbolic systems that humans use to measure trade and debt, and to the development of technologies like writing and primitive calculators, like the abacus.


2023-02-07

Using Lexical and Contextual Features of Spontaneous Speech Production to Predict Cognitive Impairment in Early-Stage Dementia

Rachel Ostrand

IBM Research


Dementia and its precursor, mild cognitive impairment, affect millions of people. Traditionally, assessment and diagnosis are performed via an extensive clinical battery, consisting of some or all of multiple hours of neuropsychological testing, neuroimaging such as MRI, blood draws, and genetic testing. This is expensive and burdensome for the participant, and thus is not a practical method for regular monitoring of an at-risk person's status. However, language production has recently been shown to contain properties which are predictive of even early-stage cognitive impairment. In this talk, I'll discuss my research investigating what properties of language production correlate with cognitive status, and I'll describe how I've built an automated pipeline for computing these properties from speech transcripts. In particular, I will focus on lexical and contextual features of language: features like word frequency, part of speech counts, and linguistic surprisal. I'll present several studies with patients at different degrees of impairment who responded to different types of speech elicitation prompts. Across studies, certain language features, largely those which capture some facet of semantic specificity and lexical retrieval difficulty, are highly correlated with participants' cognitive status. This suggests that language production, which can be collected easily and thus relatively frequently, could be used as a remote and ongoing metric for monitoring cognitive decline.
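As a rough illustration of this kind of pipeline, the sketch below computes a few simple lexical features from a transcript in Python. The frequency table and transcript are hypothetical placeholders, and the pipeline described in the talk is considerably richer (part-of-speech counts, surprisal, and other contextual features).

import math
import re

# Hypothetical corpus frequencies (per million words); a real pipeline would
# use large frequency norms plus part-of-speech and surprisal features.
FREQ_PER_MILLION = {"the": 50000.0, "dog": 120.0, "chased": 25.0, "a": 23000.0, "ball": 110.0}

def lexical_features(transcript):
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return {}
    log_freqs = [math.log10(FREQ_PER_MILLION.get(w, 0.5)) for w in words]
    return {
        "n_tokens": len(words),
        "type_token_ratio": len(set(words)) / len(words),
        "mean_log_frequency": sum(log_freqs) / len(log_freqs),  # lower = rarer words
        "mean_word_length": sum(len(w) for w in words) / len(words),
    }

print(lexical_features("The dog chased a ball"))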

2023-01-31

Words and Signs: How are they processed in the minds of deaf signers?

Zed Sehyr

San Diego State University


Many congenitally deaf and hard-of-hearing people process language visually through signs and written words. My work investigates how the sensory-perceptual and linguistic experiences of deaf signers shape language processing. I first present two recent studies that identify a unique reading profile for skilled deaf readers using statistical modeling (Study 1) and event-related potentials (ERPs) (Study 2). Study 1 assessed reading comprehension in ~200 hearing and deaf adults (matched for reading skill) and revealed that a) phonological ability predicted reading scores for hearing, but not deaf, readers, and b) orthographic skill (spelling and fingerspelling ability) as well as vocabulary size were critical to reading success for deaf readers. Study 2 further supported this unique reading profile by showing a more bilateral N170 response when deaf signers read single words. This result was attributed to reduced phonological mapping in left-hemisphere temporal regions for deaf compared to hearing readers. These studies clarify the long-standing controversy about the importance of phonological codes and highlight the need for further research into alternative routes to literacy for deaf readers.

A separate line of my research is focused on sign processing and has led to the development of the largest publicly available lexical database for any signed language (ASL-LEX; https://asl-lex.org/). I will briefly describe this database, the novel resources it provides, and what we have learned about the phonological (form-based) organization of the American Sign Language (ASL) lexicon. We are also now developing a sister database that reveals the semantic (rather than phonological) organization of the ASL lexicon. To do so, we collected >100,000 semantic free associations from deaf fluent signers—the largest labeled dataset of ASL signs obtained to date. Analysis of phonological and semantic relations visualized using network graphs has uncovered widespread patterns of systematic non-arbitrary alignment between form and meaning (i.e., iconic networks). In my future work, I plan to leverage these linguistic insights, sign language datasets, and AI/machine learning to develop models for sign recognition.

Overall, these research strands constitute important steps toward building theories of language processing inclusive of deaf communication (written and signed) that may also help guide clinical practice in characterizing and rehabilitating language deficits in deaf individuals.

2022-11-29

Seeing what it means: What we know about how readers use active vision to derive meaning from text

Elizabeth R. Schotter

Department of Psychology, University of South Florida


Skilled reading – one of our most finely tuned cognitive skills – requires coordination of a multitude of basic cognitive processes (e.g., visual perception, allocation of attention, linguistic prediction/integration). Yet the apparent ease and automaticity of the reading process that literate adults subjectively experience belies this complexity. In this talk, I will present evidence that the efficiency of the reading process relies on the ability to read ahead – to allocate attention to words before our eyes reach them. Although this parafoveal preview provides a head start on several components, not all reading processes can occur in parafoveal vision. I will provide evidence for these claims from my past eye tracking research, as well as current work in my lab using event related potentials (ERPs) and co-registration of eye movements and ERPs during natural reading.

2022-11-22

A theory of structure building in speaking

Shota Momma

Department of Linguistics


There are many things we are not particularly good at, but one thing we are relatively good at is producing novel and structurally complex sentences that are more or less consistent with our grammatical knowledge. However, this impressive ability is notoriously hard to study scientifically, and existing theories of sentence production are not very good at capturing how speakers assemble structurally complex sentences that involve syntactically interesting phenomena, such as filler-gap dependencies. In this talk, I attempt to fill this gap. Based on various findings from our lab, I advance a theory of structure building in speaking based on a formal grammatical theory. This model is able to capture various syntactic phenomena. It also makes novel counter-intuitive predictions about one of the most well-studied phenomena in sentence production: structural priming. I present experimental evidence confirming those predictions.

2022-11-15

Dirty pigs & Stubborn mules:
Pulling the curtain back on sign language and gesture

Brandon Scates

The Underground, Brandon Scates and Bonnie Gough


The current landscape of sign language research has not shed light on the magic of animal signs in visual languages. By a series of fortunate events, we now have the unique opportunity to explore the intricacies of this little-known subfield of sign language research. Following a brief history of the citation forms of animal signs and their effect on current usage, we demonstrate the database we have developed to highlight the variation and lack of variation in the American signing community.
Cross-referencing of other global sign languages illustrates several ways that natural intuitions of the human mind led to the production of gestural forms from which the lexicons of many sign languages derive. While the data are not abundant, cross-linguistic comparisons open a window into the foundational understanding of human gesture practices.

2022-11-08

Do Large Language Models Know What Humans Know?

Sean Trott

UC San Diego, Department of Cognitive Science & Computational Social Science


Humans can attribute mental states to others, a capacity known as Theory of Mind. However, it is unknown to what extent this ability results from an innate biological endowment or from experience accrued through child development, particularly exposure to language describing others’ mental states. One way to test the viability of the language exposure hypothesis is to assess whether models exposed to large quantities of human language (i.e., "large language models", or LLMs) develop evidence of Theory of Mind. We assessed GPT-3 (an LLM) on several Theory of Mind tasks, then compared the performance of GPT-3 to human performance on the same tasks. Both humans and GPT-3 displayed sensitivity to mental states in written text, though in two of the three tasks tested, GPT-3 did not perform as well as the humans. I conclude by discussing the implications for work on Theory of Mind as well as an empirical science of LLM capacities.
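To make the probing approach concrete, the sketch below compares the probability a causal language model assigns to a belief-consistent versus a reality-consistent continuation of a short false-belief passage, using the Hugging Face transformers library. The passage, continuations, and choice of GPT-2 are invented for illustration and are not the stimuli or model from the talk.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Invented Sally-Anne-style passage, not the talk's stimuli.
context = ("Sally puts her ball in the basket and leaves the room. "
           "While she is away, Anne moves the ball into the box. "
           "Sally comes back and looks for the ball in the")
continuations = {"belief-consistent": "basket", "reality-consistent": "box"}

def continuation_logprob(context, continuation):
    ids = tokenizer(context + " " + continuation, return_tensors="pt").input_ids
    n_context = tokenizer(context, return_tensors="pt").input_ids.size(1)
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    # Sum log P of each continuation token given everything before it.
    return sum(log_probs[0, t - 1, ids[0, t]].item() for t in range(n_context, ids.size(1)))

for label, word in continuations.items():
    print(label, word, round(continuation_logprob(context, word), 2))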

2022-05-31

(Re)activating the perception of space in language

Alper Kumcu

Hacettepe University in Ankara, Turkey


Grounded-embodied views of language assert that language in the mind is not detached from perception in contrast to traditional, symbolic approaches.  In support of this view, growing evidence shows that perception plays an important role in language processing. In particular, “sensorimotor simulation” (i.e., activations and re-activations of sensorimotor experiences triggered by linguistic stimuli) has important repercussions in several language-related tasks from semantic judgement to verbal memory. Throughout this talk, I will discuss a series of experiments in which we investigate how the perception of space can modulate how well we remember single words even though space is neither relevant nor necessary for successful retrieval. I will also address some methodological issues concerning norming words based on lexicosemantic and sensory variables. These studies overall corroborate the evidence that language is represented with sensorimotor experiences in mind, which, in turn, has certain consequences on language operations. I will discuss the results in the framework of grounded-embodied and extended views on memory and language.
Bio
I am a language researcher at Hacettepe University in Ankara, Turkey. I received my PhD from the School of Psychology at the University of Birmingham in 2019 with the dissertation titled “Looking for language in space: Spatial simulations in memory for language” under the supervision of Dr Robin Thompson. My research explores the interaction between language, perception and memory through behavioural methods, eye movements, and more recently, corpus-based investigations and with a crosslinguistic perspective. Most of my work can be accessed at https://alperkumcu.github.io/.
Related papers
Kumcu, A., & Thompson, R. L. (2021). Remembering spatial words: Sensorimotor simulation affects verbal recognition memory. Quarterly Journal of Experimental Psychology. https://doi.org/10.1177/17470218211059011
Kumcu, A., & Thompson, R. L. (2020). Less imageable words lead to more looks to blank locations during memory retrieval. Psychological Research, 84, 667–684. https://doi.org/10.1007/s00426-018-1084-6

2022-05-24

Vowel harmony functions, complexity, and interaction

Eric Baković

UC San Diego, Department of Linguistics (joint work with Eric Meinhardt, Anna Mai, and Adam McCollum)


Recent work in formal language theory (Heinz & Lai 2013, Chandlee 2014, Jardine 2016, Heinz 2018, among many others) has aimed to classify phonological patterns in terms of the computational complexity of the functions required to express those patterns. Much attention has been focused on the significant boundary between two classes of functions: the non-deterministic functions, at the outer edge of the class of functions that can be described with finite-state transducers, and the more restrictive weakly deterministic functions, first identified and defined by Heinz & Lai (2013). The distinction between these two classes of functions is significant because it has been claimed that all of phonology (Heinz 2011), or at least all of segmental (= non-tonal) phonology (Jardine 2016), is subregular, meaning at most weakly deterministic.
I have three goals in this talk within this context. The first goal is to illustrate distinctions among relevant classes of functions via the analysis of vowel harmony patterns from four languages (Turkish, Maasai, Tutrugbu, and Turkana). The second goal is to show that non-deterministic segmental phonological patterns do indeed exist, given the vowel harmony patterns of Tutrugbu and Turkana. The third goal is to provide a definition of weakly deterministic functions based on a notion of interaction familiar from ordering in rule-based phonology that -- unlike Heinz & Lai’s (2013) definition, which ours subsumes -- properly classifies the Turkana pattern as non-deterministic.

2022-05-10

The neural response to speech is language specific and irreducible to speech acoustics

Anna Mai

University of California, San Diego, Department of Linguistics


Spoken language comprehension requires the abstraction of linguistic information from the acoustic speech signal. Here we investigate the transition between auditory and linguistic processing of speech in the brain. Intracranial electroencephalography (EEG) was recorded while participants listened to conversational English speech. Through structured comparisons of the neural response to sounds in allophonic relationships with one another, sites that dissociated phonemic identity from acoustic similarity were identified. Mixed effects and Maximum Noise Entropy (MNE) models were then fit to account for the unique contributions of categorical phonemic information and spectrographic information to the neural response to speech. In lower frequency bands, phonemic category information was found to explain a greater proportion of the neural response variance than spectrographic information, and across all frequency bands, inclusion of stimulus covariance structure increased model prediction accuracy only when categorical phonemic information was available. Moreover, phonemic label information conferred no benefit to model fit when participants listened to speech in an unfamiliar language (Catalan). Thus, neural responses associated with categorical phonemic information are language specific and irreducible to speech acoustics.

2022-04-12

Visualizing lexical retrieval during speech production with ECoG

Adam Morgan

NYU Langone, Department of Neurology


During speech, humans retrieve a target word from among the tens of thousands of words in their mental lexicon quickly and with apparent ease. But this process involves many complex representations, transformations, and computations, which scientists are only beginning to understand at the neural level. Here, we employ direct neural recordings (ECoG) in awake neurosurgical patients to elucidate the neural instantiations of words’ (1) activation and (2) discrete stages of representation (conceptual, lemma, phonological, articulatory; Indefrey, 2011).

Five neurosurgery patients repeatedly produced 6 nouns (dog, ninja, etc.) in a picture naming block while electrical potentials were measured directly from cortex. Subsequently, patients described depicted scenes involving the same 6 nouns engaged in transitive actions (e.g. “The dog tickled the ninja”). 

We were able to predict above chance (p < 0.05, permutation test, accuracy ≈ 22%) which of the 6 nouns a subject was about to produce in the ~600 ms leading up to articulation using cross-validated multi-class classification. Accuracy increased leading up to production onset and then decreased, suggesting that the classifiers capture a neural process akin to lexical activation rather than signatures of articulatory processing (or early visual features, which were removed from analysis). We tested generalizability by applying the same trained classifier to nouns produced in sentences, showing above-chance accuracy for the first noun in the sentence.

Next, to test for discrete neural states corresponding to lexical stages, we employed a temporal generalizability approach: we trained classifiers on each time sample, then tested each of these on held-out trials, again from each time sample (following King & Dehaene, 2014; Gwilliams et al., 2020).  In contrast with most prior approaches, which manipulate lexical features (e.g., animacy, phonology) and look for resulting differences in neural data, our data-driven approach identifies stable neural states during lexical retrieval without making assumptions about what these states encode or when/where they should appear in the brain.  Results provide direct evidence for 2-4 distinct lexical states likely supporting conceptual, lemma, phonological, and articulatory representations. This is an important step towards linking neural codes to psycholinguistic constructs.
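For readers unfamiliar with the temporal generalization method, the sketch below shows its core logic on simulated data: a classifier is trained at each time sample and tested at every time sample, yielding a train-time by test-time accuracy matrix. This is an illustrative reimplementation of the general approach (King & Dehaene, 2014), not the authors' analysis code; the array sizes and the scikit-learn classifier are arbitrary choices.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Simulated data: trials x electrodes x time samples; labels = which of 6 nouns.
rng = np.random.default_rng(0)
n_trials, n_channels, n_times, n_classes = 120, 32, 40, 6
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, n_classes, size=n_trials)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# gen[t_train, t_test]: accuracy of a classifier trained at t_train, tested at t_test.
# Off-diagonal generalization indicates a neural state that is stable over time.
gen = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train[:, :, t_train], y_train)
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X_test[:, :, t_test], y_test)

print(gen.shape, gen.diagonal().mean())

Blocks of high off-diagonal accuracy in the resulting matrix are what license the inference to a small number of discrete, temporally extended lexical states.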

2022-02-15

Listening Challenges of Multi-Talker Environments: Behavioral and Neural Mechanisms of Focused and Distributed Attention

Elana Zion Golumbic

The Gonda Center for Brain Research, Bar Ilan University, Israel


A primary challenge posed by many real-life settings is that of appropriately allocating attention to a desired speaker in noisy, multi-talker situations. Successfully accomplishing this feat depends on many factors, related both to the acoustic properties of the competing speech and to the listener's behavioral goals. In this talk I will discuss recent data from our lab, where we study the cognitive and neural mechanisms underlying the ability (and the challenges) of allocating attentional resources among competing speakers, under naturalistic conditions. We will discuss the effects of acoustic load, the type of attention required, and the employment of different ‘listening strategies’, as well as the factors contributing to individual differences in attentional abilities. We will also discuss the implications of our findings for the classic debate between ‘early’ and ‘late’ selection models of attention, what we have learned about the capacity for parallel processing of concurrent speech, and the potential ‘processing bottleneck’.

2022-02-08

The Protolanguage Spectrum and Beyond

Michael Arbib

This talk reports on a work in progress, the writing of a chapter for a book of essays in honor of Derek Bickerton: On the Evolution, Acquisition and Development of Syntax (Dany Adone & Astrid Gramatke, Eds.) to be published by Cambridge University Press.


Rejecting Chomsky’s rejection of the notion of protolanguage, Section 1 briefly presents contrasting views on how protolanguage emerged in distinction from other animal communication systems, noting competing views on the roles of hand and voice.
Section 2 contrasts two approaches to the transition to language, Bickerton’s compositionality approach and the contrasting holophrastic approach, but then suggests how elements of both accounts may have come together in yielding language. However, I will argue against Bickerton and Chomsky’s assertion that Merge is the key to the transition, or that the transition was all-or-none. Instead, I recall the notion of a protolanguage spectrum, and enrich it by introducing the concept of a micro-protolanguage.
The rest of the paper explores to what extent the processes that supported the transition from protolanguage to language also support processes of language change, including grammaticalization and creolization.
Section 3 summarizes some elements of the Heine-Kouteva theory of grammaticalization and seeks to assess whether, rather than depending on pre-existence of a language to grammaticalize, it can be recast in terms of more general social learning mechanisms that apply to action and cognition, not just within language, and to protolanguages.
Section 4 then seeks a similar approach to pidgins and creoles. I suggest that the line is blurred between Bickerton’s Language Bioprogram Hypothesis and at least some versions of Chomsky’s ever-changing notion of Universal Grammar. I will thus suggest that we can approach the topic while avoiding any notion of a Universal Grammar if we replace the Language Bioprogram Hypothesis with a more general Bioprogram Hypothesis that exploits mechanisms in place in (proto)humans even at the time of the emergence of protolanguage.

2022-02-01

On the properties of null subjects in Sign Languages: the case of French Sign Language (LSF)

Angélique Jaber

LLF, Paris, France; CNRS, Paris, France; Université de Paris, Paris, France; IJN, Paris, France; DEC, Paris, France; ENS, Paris, France, EHESS, Paris, France; PSL, Paris, France

Caterina Donati

LLF, Paris, France; CNRS, Paris, France; Université de Paris, Paris, France

Carlo Geraci

IJN, Paris, France; DEC, Paris, France; ENS, Paris, France; EHESS, Paris, France; CNRS, Paris, France; PSL, Paris, France


The typology of subject omission in simple declarative sentences ranges from languages that simply do not allow it, like English and French, to languages that allow it as long as a minimum degree of topicality is guaranteed, like Chinese and Japanese. In between, there are various languages in which subject omission is licensed, for example by rich agreement as in Italian and Spanish, by a particular set of grammatical features like first and second person in Finnish, or by tense as in Hebrew. In other languages, such as German, subject omission is limited to expletive sentences. This rich typology observed in spoken languages is also attested across sign languages, with one important exception: there is no known sign language that disallows subject omission categorically. The goals of this presentation are twofold: first, we apply syntactic and semantic tests to assess the boundaries of subject omission in French Sign Language and characterize it within the typology; second, we discuss this apparent anomaly of sign languages in light of some particular aspects of grammars in the visual modality.

2022-01-18

Constraints on sound structure at multiple levels of analysis

Matt Goldrick

Northwestern University (at UCSD for Jan. + Feb.)


Although understanding the mind/brain has been argued to require developing theories at multiple levels of analysis (Marr, 1982, et seq.), in practice specific research projects (my own included!) typically privilege explanations at a particular level of description. I'll discuss two recent projects from my lab that critically relied on understanding sound structure at different levels of explanation. The structured variation of external sandhi arises in part due to the nature of the psychological mechanisms that compute word forms in production. Psycholinguistic studies of implicit learning of phonotactic constraints yielded puzzles that can only be resolved by considering constraints on phonotactic grammars. Insights like these suggest researchers in the cognitive science of language may benefit from more careful attention to insights at multiple levels of analysis.

2021-11-30

Electrophysiological Insights into Figurative Language Comprehension

Seana Coulson

Department of Cognitive Science, UC San Diego


In this talk I'll describe a number of studies using scalp-recorded ERPs to address how people understand metaphoric meanings as well as literal meanings that rely on other kinds of cognitive mappings.

2021-11-02

World Knowledge Influences Pronoun Resolution both Online and Offline

Cameron Jones

Department of Cognitive Science, UC San Diego


Understanding language necessarily involves connecting linguistic input to knowledge about the world, but does world knowledge play a primarily elaborative role (fleshing out details of the core message) or can it influence the core propositional interpretation of the sentence itself? Across 3 experiments, we found evidence that knowledge about the physical world influences pronoun comprehension both offline (using comprehension questions) and online (using a self-paced reading paradigm). The results are consistent with the theory that non-linguistic world knowledge plays a constitutive role in language comprehension. An alternative explanation is that these decisions were driven instead by distributional word knowledge. We tested this by including surface statistics-based predictions of neural language models in regressions and found that physical plausibility explained variance on top of the neural language model predictions. This indicates that at least part of comprehenders' pronoun resolution judgments comes from knowledge about the world and not the word.

2021-05-25

Acquisition of plural classifier constructions in ASL: different learners, different errors

Nina Semushina

Department of Linguistics, UC San Diego


Classifiers have been attested in most of the sign languages studied to date. They are morphologically complex predicates, where the handshape represents the class that the entity belongs to and may be combined with the depiction of the location/movement. Classifiers are used to mark plural number in ASL and other sign languages. Classifier handshapes are iconic, yet it has been shown that this iconicity does not help young children to learn classifiers quickly and without errors.
What happens when adult learners acquire the language? What errors can adults make and still be understood? In our talk, we will discuss three experiments investigating the use of plural classifier predicates in American Sign Language (ASL) by deaf native ASL signers (L1), by deaf late first language learners (LL1), and by hearing second language learners (L2).

2021-04-06

Are word senses categorical or continuous?

Sean Trott

Department of Cognitive Science, UC San Diego


Traditionally, the mental lexicon is likened to a dictionary, with ambiguous words mapping onto multiple "entries". But other researchers have argued that this model fails to capture more dynamic aspects of word meaning. In an alternative model, words are viewed as cues to a continuous state-space––such that "senses" simply constitute regular patterns or "clusters" in a word's context of use. Is there evidence for the psychological reality of discrete sense categories above and beyond these statistical regularities? Or is the notion of word senses primarily a convenient abstraction? In this talk, we attempt to answer these questions using a primed sensibility judgment paradigm.

2021-03-30

Rethinking Bilingual Development and Disorder

Elizabeth D. Peña

School of Education, University of California, Irvine


In the U.S. one in five children has exposure to another language in their home or community. As such, patterns of language acquisition can be highly variable. An educational challenge in this population is how to distinguish between typical and atypical performance in L1 and L2 use. Comparison of bilingual children’s language to that of monolinguals may contribute to high rates of misidentification of developmental language disorder (DLD). On the other hand, assumptions of a “normal” bilingual delay may contribute to documented delays in identification and intervention. In this talk I will present data examining 1) whether bilingual children are at elevated risk for DLD; 2) how we can combine L1 and L2 performance to increase diagnostic accuracy for determining DLD in bilinguals; and 3) the nature of the “bilingual delay” using a person-based vs. a variable-based approach.

2021-03-09

Patterns and constraints in numerical symbols and words: a reckoning

Stephen Chrisomalis

Wayne State University


Among the various modalities through which humans represent numbers, the lexical numeral systems associated with the world's languages and the symbolic numerical notations mainly associated with writing systems are the two that have attracted the most scholarly attention. Drawing on work from my book, Reckonings: Numerals, Cognition, and History (Chrisomalis 2020), I show that these two sets of representations are both subject to cognitive constraints that structure their cross-cultural variability. The gulf between number systems of both types that are imaginable in principle, and those that are used in practice, is enormous. But number words and number symbols have radically different properties because they depend on different modalities and serve different functions. The constraints on one cannot be derived from the constraints on the other. Rather than seeing numerical notations as derivative of lexical numerals, these two systems interrelate in complex ways. Studies of numerical cognition must take account of the variability across these two modalities if they aim to more completely analyze the human number sense.

2021-03-02

Hand-me-down syntax: Towards a top-down, phase-theoretic model of sentence generation

Doug Merchant

San Diego State University


Beginning with the Minimalist Program (Chomsky, 1993, et seq.), syntactic derivations are viewed as step-by-step procedures through which incremental applications of Merge combine words into sentences. Such derivations are widely (if tacitly) assumed to proceed from the bottom up, i.e., beginning with the most deeply embedded parts of a structure. In this talk, I contend that this assumption is an unnecessary artefact of older representational views of the grammar, and that its retention has stymied attempts to reconcile linguistic models of structure-building with psycholinguistic models of production. Having (hopefully) established this, I then survey a variety of arguments for top-down structure-building, with a focus on theory-external arguments such as the nature of temporary memory capacity (Cowan, 2001, 2015; Baddeley et al., 1987). The theory-internal evidence is somewhat scanter, but includes conflicting constituency in English and Japanese (Phillips, 1996, 2003; Chesi, 2007, 2015), as well as WH-movement in multiple questions in Bulgarian (Richards, 1999).
Next, I consider top-down structure building in the context of phase theory (Chomsky, 2001, et seq.), in which derivations are divided into partially overlapping stages that are cyclically transferred to the interfaces. In such a system, a clause consists of a functional / discourse layer (CP/TP) above a lexical / propositional layer (vP/VP); these layers, which I argue are qualitatively different and key to understanding the dynamics of sentence generation, correlate directly with what Chomsky (2007, 2008) calls the duality of semantics. I then demonstrate how one might implement cyclic SPELLOUT in a top-down model of the grammar, one which (for independent reasons) also incorporates postsyntactic lexical insertion, presenting a full derivation. Finally, I briefly consider how island phenomena might be construed in a top-down system, and conclude.

2021-02-23

Mapping the semantic structure of American Sign Language lexicon using the ASL-LEX Database

Zed Sevcikova Sehyr

Laboratory for Language and Cognitive Neuroscience, San Diego State University


One of the central assumptions about lexical organization is that the relationship between form and meaning is largely arbitrary. However, this assumption underrepresents characteristic traits of sign languages, which contain varying degrees of iconic form-meaning mappings. What role does iconicity play in shaping the lexicon? The study builds on the large-scale data collection methods and visualization techniques that we developed for the ASL-LEX database (http://asl-lex.org/) in which phonological neighborhoods are displayed. The goal of this new study is to visualize and characterize the semantic structure of the ASL lexicon in order to identify areas of iconic and non-iconic systematicity. To accomplish this, we will use information derived from semantic association data from a group of deaf ASL signers, and techniques from network science. In this paradigm, deaf signers are given a cue sign (e.g., CAT) and are asked to produce three signs that immediately come to mind (e.g., DOG, PET, PLAY). A group of trained signers will tag these responses with glosses from the ASL-LEX database. From the data, we will then construct a semantic network graph where nodes represent signs, and edges connecting the nodes are weighted based on the strength of association. To test the feasibility of this paradigm, we conducted a small-scale pilot study with 80 ASL cue signs balanced for frequency and iconicity. Three ASL signers generated three semantic associates for each cue sign in ASL, resulting in a corpus of 520 tagged signs. The preliminary results revealed that 335 signs (64%) were connected to at least one other sign in the lexicon with 411 edges, suggesting good overall interconnectivity among signs. Sign pairs like TEACH-STUDENT and TEACH-SCHOOL were strong semantic associates, while TEACH-LEARN was a weakly related pair in the lexicon. Some semantic neighborhoods overlapped with phonological neighborhoods, e.g., iconic signs BELIEVE, HOPE, THINK and KNOW were semantic neighbors which overlap by location (the head) and handshape. The completed study will be the largest dataset of its kind (with ~100,000 lexically-tagged ASL videos) and a valuable resource for further research or sign recognition technologies.
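As a sketch of how such an association network can be assembled, the Python snippet below builds a weighted graph from cue-response pairs using the networkx library; the glosses and pairs are illustrative examples, not data from the study.

import networkx as nx

# Illustrative (cue, associate) pairs tagged with ASL-LEX glosses.
associations = [
    ("CAT", "DOG"), ("CAT", "PET"), ("CAT", "PLAY"),
    ("TEACH", "STUDENT"), ("TEACH", "STUDENT"), ("TEACH", "SCHOOL"),
    ("TEACH", "LEARN"),
]

G = nx.Graph()
for cue, response in associations:
    if G.has_edge(cue, response):
        G[cue][response]["weight"] += 1    # edge weight = association strength
    else:
        G.add_edge(cue, response, weight=1)

# Interconnectivity: how many signs link to at least one other sign, and how strongly.
print(G.number_of_nodes(), "signs,", G.number_of_edges(), "edges")
print("TEACH-STUDENT strength:", G["TEACH"]["STUDENT"]["weight"])
print("TEACH-LEARN strength:", G["TEACH"]["LEARN"]["weight"])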

2021-02-16

Bilingual language processing: Interactions between lexical retrieval and phonetic production

Maria Gavino

Department of Linguistics, Northwestern University


What is the relationship between lexical retrieval and phonetic production in bilingual language processing? Various factors related to bilingual language processing affect bilinguals' selection of context-appropriate words and speech sounds. One factor is whether bilinguals are using one (single context) or both (mixed context) languages. Increased language selection difficulty in mixed contexts (especially when the previous word is in a different language than the target word, i.e., switch context) slows down retrieval and increases accentedness. Another factor is whether a word has two highly distinct forms (non-cognates) or highly similar forms (cognates) for a concept in both languages. Increased cross-language activation for cognates facilitates retrieval, but increases accentedness. In this project, 18 Spanish-English bilinguals named pictures of cognate and non-cognate words in single and mixed contexts in Spanish and English. Reaction time, voice onset time, and vowel formants were analyzed. Results show that there are cognate facilitation effects, mixing, and switching costs for retrieval, but only consistent mixing costs for accentedness. The dissociation between these effects during lexical retrieval and phonetic production suggests continuing interactions between them after the initiation of the response.

2021-02-09

Beyond intentional proximity: Exploring self-motion to characterize effortful conversation

Carson G. Miller Rigoli

Department of Cognitive Science, UC San Diego


As anyone who has attended an awkward reunion can attest, conversation is effortful and fatiguing. This fact has recently gained greater prominence in the speech and hearing sciences as researchers increasingly recognize the importance of understanding speech in the context of naturalistic conversation. In this presentation, I will provide a summary of recent work in ergonomics, motor control, and the speech and hearing sciences that suggests that the maintenance of effort in cognitive and perceptual tasks is intertwined with the control of our bodies. Self-motion, and postural variability in particular, is one area that promises to broaden our understanding of how effort is maintained in conversation. I will present initial exploratory work which suggests that movement variability can be used as a marker of effort in naturalistic conversation. I will conclude by discussing why an understanding of the role of self-motion in conversation is particularly important in light of widespread adoption of video call technologies.

2020-12-08

Real-time brain response to an artificial language about time

Seana Coulson

Department of Cognitive Science, UC San Diego

Joint work with Tessa Verhoef, Tyler Marghetis, and Esther Walker


Here we consider the connection between the cultural evolution of a language and the rapid processing response to that language in the brains of individual learners. In an artificial language learning task, participants were asked to communicate temporal concepts such as “day,” “year,” “before,” and “after” using movements of a cursor along a vertical bar. Via social interaction, dyads achieved above-chance performance on this communication task by exploiting an early bias to use the spatial extent of movement on the vertical bar to convey temporal duration. A later emerging strategy involved systematic mappings between the spatial location of the cursor and the temporal direction, i.e. past versus future. To examine how linguistic properties of a semiotic system relate to the demands of transmitting a language to a new generation, the language developed by one successful dyad in the communication game was used to seed an iterated language learning task. Through simulated cultural evolution, participants produced a ‘language’ with enough structure to convey compositional concepts such as ‘year before’. In the present study, EEG was recorded as healthy adults engaged in a guessing game to learn one of these emergent artificial languages. Results indicate a neural correlate of the cognitive bias to map spatial extent onto temporal duration, and suggest a dynamic dimension to iconicity.

2020-12-01

A meta-analysis of task-based differences in bilingual L1 and L2 language networks

Lindy Comstock

Department of Psychology, UC San Diego


The neural representation of a second language in bilinguals has been shown to be modulated by factors such as proficiency, age of acquisition, and frequency of use. Today, papers investigating bilingual L1 and L2 language networks number in the hundreds. Yet conclusive findings from fMRI studies as to whether the neural resources recruited by L1 and L2 can be reliably differentiated are complicated by small sample sizes and the wide array of experimental task designs implemented in the field. One method to increase the statistical power and generalizability of findings in individual fMRI studies is to conduct a meta-analysis that averages across multiple studies. There are currently no fewer than seven meta-analyses devoted to various aspects of bilingual language use (Cargnelutti, Tomasino, & Fabbro, 2019; Indefrey, 2006; Liu & Cao, 2016; Luk, Green, Abutalebi, & Grady, 2012; Sebastian, Laird & Kiran, 2011; Sulpizio, Del Maschio, Fedeli, & Abutalebi, 2019; Tagarelli, Shattuck, Turkeltaub, & Ullman, 2019). However, the specific nature of many research questions in bilingual neuroimaging research and a relatively early shift in the literature away from simple language tasks to other paradigms such as language switching have resulted in a multiplicity of research designs that defy easy categorization or comparison. Thus, despite the proliferation of bilingual neuroimaging studies, the number of these that utilize the same task paradigm remains quite small and insufficient to investigate the intersection of multiple factors like AoA, proficiency, degree of exposure, and task. This talk will critique previous meta-analyses devoted to L1 and L2 processing and present the results of the first meta-analysis to group studies strictly by task, with the goal of illustrating how task differences may contribute to previous results.

2020-11-24

Discussion of Online Data Collection

Hosted by: Vic Ferreira

Department of Psychology, UC San Diego

+ more

This week we are doing things a little differently! With COVID once again getting worse rather than better, many of us have been trying to move our experimental data collection online (or to adapt our studies into online behavioral experiments). Therefore, to help each other in this transition, we will be having a group discussion about online data collection methodologies. Please think about any questions you have regarding online data collection and any feedback or advice you could offer to others.

Please fill out the Google form you received from the CRL Talks mailing list with questions you have or topics for discussion so we can scaffold the discussion! 

Also, feel free to come and just listen. Hopefully, we will be sharing a lot of expertise and discussing answers to some pressing questions, so this will definitely be one not to miss!

2020-11-17

Why do human languages have homophones?

Sean Trott

Department of Cognitive Science, UC San Diego

+ more

Human languages are replete with ambiguity. This is most evident in homophony––where two or more words sound the same, but carry distinct meanings. For example, the wordform “bark” can denote either the sound produced by a dog or the protective outer sheath of a tree trunk. Why would a system evolved for efficient, effective communication display rampant ambiguity? Some accounts argue that ambiguity is actually a design feature of human communication systems, allowing languages to recycle their most optimal wordforms (those which are short, frequent, and phonotactically well-formed) for multiple meanings. We test this claim by constructing five series of artificial lexica matched for the phonotactics and distribution of word lengths found in five real languages (English, German, Dutch, French, and Japanese), and comparing both the quantity and concentration of homophony across the real and artificial lexica.

Surprisingly, we find that the artificial lexica exhibit higher upper-bounds on homophony than their real counterparts, and that homophony is even more likely to be found among short, phonotactically plausible wordforms in the artificial than in the real lexica. These results suggest that homophony in real languages is not directly selected for, but rather, that it emerges as a natural consequence of other features of a language. In fact, homophony may even be selected against in real languages, producing lexica that better conform to other requirements of humans who need to use them.

We then ask whether the same is true of polysemy (in English), a form of lexical ambiguity in which the same wordform has two or more related meanings. Unlike homophony, we find that at least in English, wordforms are more polysemous than one would expect simply on account of their phonotactics and length. Combined, our findings suggest that these forms of ambiguity––homophony and polysemy––may face distinct selection pressures.
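
For readers who want to see the logic of this comparison concretely, here is a minimal Python sketch, offered purely as an illustration and not as the study's actual pipeline: it trains a toy character-bigram model on a small real word list (a stand-in for the phonotactic models of English, German, Dutch, French, and Japanese), generates an artificial lexicon matched for word lengths, and counts how much homophony arises by chance.

import random
from collections import Counter, defaultdict

def train_bigram(words):
    """Toy character-bigram phonotactic model; '#' marks word boundaries."""
    counts = defaultdict(Counter)
    for w in words:
        padded = "#" + w + "#"
        for a, b in zip(padded, padded[1:]):
            counts[a][b] += 1
    return counts

def sample_word(counts, length):
    """Sample a word of a given length from the bigram model (retry on dead ends)."""
    while True:
        w, prev = "", "#"
        for _ in range(length):
            successors = [c for c in counts[prev] if c != "#"]
            if not successors:
                break
            prev = random.choices(successors, weights=[counts[prev][c] for c in successors])[0]
            w += prev
        if len(w) == length:
            return w

def homophony(lexicon):
    """Count wordforms carrying more than one 'meaning' (i.e., duplicate forms)."""
    return sum(n - 1 for n in Counter(lexicon).values() if n > 1)

# Toy stand-in for a real lexicon; each entry is treated as one 'meaning'.
real_lexicon = ["bark", "bank", "tree", "dog", "dock", "lock", "log", "ship", "shape", "trunk"]
model = train_bigram(real_lexicon)
artificial_lexicon = [sample_word(model, len(w)) for w in real_lexicon]
print("homophony in real lexicon:", homophony(real_lexicon))
print("homophony in artificial lexicon:", homophony(artificial_lexicon))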

2020-11-10

Conceptual combination and the emergence of privativity

Joshua Martin

Department of Linguistics, Harvard University

+ more

Privativity is a linguistic phenomenon in which an instance of modification results in an output (usually, a modified noun phrase) disjoint from the input (the bare noun), e.g. the set denoted by fake door contains no members of the set denoted by door. So-called privative adjectives (fake, counterfeit, mock, etc.) are those which uniformly license this inference; that is, for a privative adjective A and any noun N, X is an AN -> X is not an N. This is contrasted with the standard intersective inference licensed by more basic property adjectives, where X is an AN -> X is an N. In this talk, I will introduce the topic of privative composition as a problem for current theories of formal semantics, which are unable to successfully predict the result of such modification, and argue that a successful theory of privative composition requires reference to enriched lexical meanings reflecting the structure of concepts. I will present some preliminary experimental data on variation in privative meanings, which motivate a reframing of the problem as one of 'privative compositionality', rather than of 'privative adjectives', divorcing the notion from a specific lexical class and treating it rather as an emergent phenomenon of the compositional process. A lexically enriched theory which treats privativity as a contingent byproduct of conceptual combination, rather than a grammaticalized process, I argue, is both a better account of the empirical picture and raises interesting questions about how conceptual structure is realized in semantic composition. Privativity is shown to have distinct syntactic reflexes, as well, and so serves as a case study for the interaction of conceptual meaning, semantic composition, and syntactic structure.

2020-10-27

Mindfulness meditation engages a novel brain-based pain modulatory pathway

Fadel Zeidan

Department of Anesthesiology, UC San Diego

+ more

Mindfulness-based practices reliably reduce pain. Recent findings demonstrate that mindfulness-induced analgesia engages multiple, distinct mechanisms. In two separate neuroimaging studies, we found that mindfulness-based analgesia was associated with greater a) thalamic deactivation and b) prefrontal cortical (PFC) activation. We also discovered that mindfulness does not engage classical, endogenous opioidergic systems to reduce pain. We have developed a working hypothesis postulating that mindfulness-induced shifts in executive attention (non-reactive attention to the breath) facilitate pain-relief by engaging PFC-driven inhibition of the thalamus to reduce the elaboration of nociceptive information in somatosensory cortices.  This presentation will provide a comprehensive delineation of the psychological, endogenous, autonomic, and neural mechanisms supporting mindfulness-based analgesia.

2020-10-20

Memory for stimulus sequences

Stefano Ghirlanda

Brooklyn College and Stockholm University

+ more

Humans stand out among animals for their unique capacities in domains such as language, culture and imitation, but it is difficult to identify basic cognitive elements that are specifically human. In this talk, I will present evidence that, compared to humans, non-human animals have a more limited capacity to discriminate ordered sequences of stimuli. Collating data from over 100 experiments on stimulus sequence discrimination, I will show that animals commit pervasive and systematic errors, such as confusing a red–green sequence of lights with green–red and green–green sequences. These errors can persist after thousands of learning trials. Overall, the data are consistent with the assumption that non-human species represent stimulus sequences as unstructured collections of memory traces. This representation carries only approximate information about stimulus duration, recency, order, and frequency, yet it predicts non-human performance accurately. Lastly, I will present ongoing efforts to test sequence discrimination abilities in bonobos.

2020-10-13

The Language Organ: Architecture and Development

William Matchin

Department of Communication Sciences and Disorders, University of South Carolina

+ more

The concepts of “the language organ” and “the language acquisition device” advanced by Chomsky and others in the tradition of Generative Grammar are controversial in part because their neurobiological instantiation is unclear. Here I address this by reviewing recent evidence from brain imaging and lesion-symptom mapping in aphasia. I propose a model of how the language organ develops in the brain, framed in the context of Berwick & Chomsky's (2016) recent thesis on the architecture of the human language faculty and its evolution. In this proposal, an innate syntactic computational system combines with domain-general sequencing systems, which become gradually specialized for speech externalization during language development.

2020-06-02

Selective Activation of Semantic Features

Alyssa Truman

Department of Cognitive Science, University of California, San Diego

+ more

The question of whether activation of our conceptual system is fixed or flexible has long been debated. Spreading activation suggests that upon encountering a concept we activate other concepts that are related to it. However, this account tells us nothing about the dynamic nature of our semantic system. Do we activate all the relevant information about a concept, and the words related to it, each and every time we retrieve a concept? Or do we activate only the task- or context-relevant information about a concept, and the words related to it? In our study we wanted to assess what type of information is activated dynamically by looking at the modulation of ERPs for 3 types of word pairs: unrelated word pairs, related word pairs that share the feature that subjects were required to attend to by task, and related word pairs that share a feature that subjects were not required to attend to. We found ERP modulation of the target word for attended-related vs unrelated word pairs in 3 time windows: 50-150ms, 350-450ms and 500-700ms. We did not find a statistically significant difference between unattended-related and unrelated word pairs in either task; however, the mean amplitudes of feature-unattended target words were more positive than those of unrelated target words in the N400 time window. Our findings, taken together, point to a flexible view of semantic activation that is not automatic and is affected by context, suggesting strong top-down activation of (task-)relevant features.

2020-05-26

Learning Structural Alternations: What guides learners’ generalization?

Sin Hang Lau, Shota Momma, and Victor S. Ferreira

Department of Psychology, University of California, San Diego

+ more

Speakers can sometimes flexibly choose between multiple structures to describe the same event (e.g., prepositional vs. double object datives in English), but this flexibility is sometimes constrained (e.g., using only PD but not DO with “donate”). Additionally, typologically different languages have different degrees of flexibility. For example, languages that allow scrambling (e.g., Korean) have more flexible word orders than those that do not (e.g., English). The question we address here is whether learning these language-general patterns comes from simple exposure, versus being “helped” by some sort of internal bias. 
In this talk, I present work that examines whether learners generalize structural alternations when learning a typologically different grammar, and if so, what guides their generalization. Three miniature novel-language learning experiments tested usage-based/statistical accounts against an internal bias account in both production and comprehension. Usage-based/statistical accounts predict that learners should largely constrain their production and acceptability of alternations to the verb that they learned the structures with and show limited generalization to other verbs. In contrast, an internal bias account predicts that learners who are exposed to scrambling should show a more liberal generalization pattern than those who are not, if learners indeed have tacit knowledge about the typologically relevant evidence that scrambling signals flexibility. The critical manipulation across our experiments was that half of the participants learned a grammar with no evidence of scrambling, whereas the other half saw scrambling. Our results revealed that the two groups showed different generalization patterns in both production and comprehension. Participants who saw no evidence of scrambling produced and accepted alternations with the verb that they learned the structures with, but to a smaller extent with novel verbs; whereas participants who saw scrambling produced and accepted alternations equally across all verbs. Our results suggest that learners do not merely track statistical patterns in the input but also use internal linguistically sophisticated biases to generalize structural alternations, supporting an internal bias account.

2020-05-19

Does Inhibition Help Bilinguals Switch Languages?
Evidence from Young and Older Bilinguals

Tamar H. Gollan, PhD

Professor of Psychiatry, UCSD

+ more

People love to invoke inhibition to explain all kinds of human behavior, but they also fight frequently and furiously over what inhibition is, how to measure it, and if it’s impaired in aging or not. This talk will briefly review some evidence for and against the Inhibitory Deficit Hypothesis, and then will explore reversed language dominance – a uniquely powerful signature of inhibition that can be found in bilinguals when they name pictures and are cued to switch back and forth between languages. Surprisingly, this sometimes leads bilinguals to speak more slowly in the language that is usually more proficient (i.e., dominant). Although this is arguably one of the most striking demonstrations of inhibitory control, it has rarely been studied in aging bilinguals. I will present a study that Alena Stasenko, Dan Kleinman, and I designed to reveal if inhibition transfers from one set of pictures to a new set of pictures in cued language-switching, and if such inhibition is impaired in aging or not. The results revealed strong evidence for global inhibitory control of the dominant language, and that older bilinguals either can’t or simply don’t apply control in the same way. This paints a picture that features inhibition as a basic mechanism of bilingual language selection, but other evidence we have suggests not necessarily in a simple “better-inhibition = better-bilinguals” manner. While the notion of inhibitory control as a unitary construct has been challenged repeatedly in heated debates across multiple sub-literatures in the field, I’m gonna try to convince you that dispensing with it is premature as it continues to lead to fruitful lines of investigation.

2020-05-12

Towards a functional anatomy of language: a 20 year retrospective

with Gregory Hickok and David Poeppel
moderated by William Matchin

+ more

A discussion with Gregory Hickok and David Poeppel to celebrate the 20th anniversary of their dual-stream model hosted by William Matchin.

This talk is on YouTube: https://www.youtube.com/watch?v=6GgeLbhXeCg.

2020-05-05

Lap or Lab: The cross-domain entrainment effects from pure tones to speech perception

Tzu-Han Zoe Cheng and Sarah C. Creel

UCSD, Department of Cognitive Science

+ more

Temporal context influences how humans perceive the durations of acoustic events. Recently, entrainment, a hypothesized process in which internal oscillators synchronize with external temporal sequences, was proposed as the underlying mechanism of timing estimation at short durations. However, this approach has predominantly been tested on simple, music-like acoustic stimuli (e.g. pure tones). This raises the question of whether the same temporal context mechanisms are operative in other, more complex acoustic domains such as speech sound perception, which some have claimed operates in a modular fashion. In the current study, Experiment 1 first demonstrated an entrainment effect within a restricted time range. Based on this finding, we extended the paradigm to spoken language. In Experiment 2, we investigated whether an entrainment effect from a series of pure tones could influence syllable categorization (i.e. /ba/ vs. /pa/) and word categorization (i.e. lap vs. lab), where the categories were created by varying the durations of the sound. Our findings suggest that entrainment from pure tone contexts may influence speech sound categorization. Data from these and ongoing studies have the potential to reveal a general mechanism of short-duration entrainment that can explain temporal context effects on timing perception in acoustically diverse domains.

2020-04-28

Perception of ATR vowel contrasts by Akan speakers reveals evidence of near-merger

Sharon Rose1
Michael Obiri-Yeboah1
and Sarah Creel2

1 UCSD Linguistics
2 UCSD Cognitive Science

+ more

Vowel systems in many African languages show contrasts for Advanced Tongue Root (ATR), in which the tongue root is retracted or advanced. These systems have been widely studied from phonological perspectives as they typically exhibit vowel harmony, where words contain only advanced (+ATR) vowels or non-advanced (-ATR) vowels. They have also been subject to numerous acoustic and articulatory (X-ray, ultrasound) studies. Yet, very few perceptual studies of ATR distinctions by speakers of ATR languages have been conducted. This is despite anecdotal reports that some ATR vowel contrasts are hard to perceive (Casali 2017) and that certain ‘weaker’ -ATR vowels tend to merge and neutralize in these systems. In this study, we investigate perception of ATR contrasts by Akan speakers, assessing whether phonemic contrast or phonetic similarity influences perception of ATR distinctions.

Akan (a Kwa language of Ghana) has ten vowels divided into two sets: +ATR [i e o u ɜ] and -ATR [ɪ ɛ ɔ ʊ a]. Nine vowels are phonemically contrastive, whereas [ɜ] is non-contrastive and created via vowel harmony. Two hypotheses are examined. The phonological hypothesis predicts that participants will rely on phonemic contrast and perform well on phonemically contrastive vowel pairs, but not on those with allophonic [ɜ], which only occurs preceding +ATR vowels due to vowel harmony. The acoustic hypothesis predicts that participants will rely on acoustic similarity, and that highly similar vowels will pose perceptual difficulties. The acoustically similar vowels are the [e]/[ɪ] and [o]/[ʊ] pairs, which differ in two phonological features, height and ATR, but are phonemically contrastive.

Two experiments were run with a total of 82 subjects in Ghana. Experiment 1 was an AX discrimination task with monosyllables. Results showed that participants had a >90% accuracy rate at detecting all ATR contrasts, including the allophonic [a]/[ɜ] pair, and a >80% accuracy rate on all Height contrasts (ex. i vs. e). However, pairs contrasting both Height and ATR had only a 30% accuracy rate (still above a false-alarm baseline of 8%). These results support the acoustic hypothesis. Nevertheless, there were some factors in the experiment that make these results difficult to interpret on their own. The use of monosyllables may have impacted perception, as the +ATR vowels [e o] are rare in monosyllables. Vowel harmony may also affect perception differently in longer words. Experiment 2 was an ABX discrimination task with bisyllables, using stimuli with identical vowels (gebe / gɛbɛ / gɛbɛ) or non-identical vowels (gɛbe / gɛbɛ / gɛbɛ). Results were similar to those in Experiment 1: For identical stimuli, acoustically similar Height-and-ATR-mismatched pairs were still poorly discriminated (58%, where 50% is chance) compared to other pairs (84%). The same discrimination rate was found for non-identical pairs (58%). This poor discrimination further impacted how subjects perceived non-identical disharmonic words, as words that violated vowel harmony (ex. gɛbe) were perceived similarly to those that had different height (ex. gibe). Overall, these results suggest that Akan exhibits near merger, the situation whereby speakers differentiate sounds in production, but report that they are ‘the same’ in perception tests.

2020-04-21

Meaning identification using linguistic context in children with diverse language experiences

Alyson D. Abel

Assistant Professor, SDSU and SDSU/UCSD JDP-LCD; Director, Language Learning Lab

+ more

When learning a new word, the learner has to identify both the word form and its meaning, map the form and meaning and, over time, integrate this information in the mental lexicon. Much of the word learning literature has focused on acquisition of the word form with less attention paid to the process of meaning identification. Given the importance of robust semantic knowledge for a word’s representation in the mental lexicon, the lack of focused research on meaning identification is a significant gap. Additionally, much of the word learning literature has examined younger children, in the infant through preschool range. Younger children often use information in their environment, such as a physical referent and explicit instruction or guidance from parents, to help them identify a new word’s meaning. On the other hand, school-age children and adults rely more heavily on the information in the linguistic context for meaning identification. For children with diverse language experiences, including children raised in low socioeconomic status (SES) environments, children who are bilingual, and children with language impairment, the process of using linguistic context to identify a word’s meaning may be affected. In this talk, I will introduce an experimental meaning identification task, where participants are introduced to unfamiliar word forms in sentences that vary in their support for identifying the unfamiliar word’s meaning. Behavioral performance is assessed by whether participants can identify the word’s meaning. EEG is collected during the meaning identification task, with analyses focused on the N400 ERP component. Behavioral and ERP data will be presented from four groups of school-age children: three groups of children with typical language development (monolingual middle-SES children, children from low-SES homes, and bilingual children) and children with specific language impairment. Findings will be interpreted within a framework of how different language experiences shape behavioral and neural indices of meaning identification.

2020-04-14

Cost-free language switching in connected speech? Syntactic Position matters

Chuchu Li

Department of Psychiatry, UC San Diego

+ more

Bilinguals spontaneously switch languages when conversing with other bilinguals in real life, although numerous laboratory studies have revealed robust language switching costs even when switching is voluntary or predictable. One reason is that some words are more accessible in the other language, and accessibility-driven switches can be cost-free in isolated word production (e.g., single picture naming; Kleinman & Gollan, 2016). The present study examined whether accessibility-driven language switching is costly in connected speech. We measured bilinguals’ speech onset latency as well as the production duration of each word before the switch word to monitor when switch costs occur. Two experiments consistently showed that lexical accessibility-driven switching is sometimes cost-free in connected speech, but it depends on where switches occur. Words trained to be lexically accessible enabled cost-free switching when produced across phrases (e.g., The butterfly moved above the mesa), but switch costs returned when bilinguals had to produce language switches within a phrase (e.g., The butterfly and the mesa). This is probably because the phrase is the default speech planning unit in sentence production (Smith & Wheeldon, 1999; Martin et al., 2010). In contrast to isolated word production, when bilinguals produce connected speech they select a default language, that is, most words are produced in this language. Our results suggest that in bilingual connected speech, selection of the default language operates over planning units rather than over the language as a whole.

2020-04-07

The PASCAL Pen: Pressure Sensing for Cognition, Autonomic Function and Language

Carson Miller Rigoli1 Eve Wittenberg2

1 UC San Diego, Cognitive Science
2 UC San Diego, Linguistics

+ more

This talk will introduce PASCAL, a collaborative project to develop a handgrip-pressure-sensitive pen for measuring cognitive workload and mental stress during written language production. Mentally challenging tasks – solving a tricky math problem, writing a difficult word, switching from one language into another, suppressing an urge to swear, even trying to lie effectively – impact not only cognitive processing 'in the head' but also lead to a cascade of physiological responses throughout the body. Systematic effects of mental stress can be observed through changes in pupil dilation, cardiovascular activity and respiration. There is evidence indicating that cognitive tasks also lead to changes in muscle tension, for instance during handwriting. Handgrip pressure may therefore provide an additional measure to supplement pupillometry and galvanic skin response to understand cognitive workload in reading, writing, mathematics and other socially and academically critical tasks. Recent advances in soft-matter physics have made the construction of lightweight, flexible, and accurate pressure sensors more affordable than ever, allowing our collaborator Dr. Annie Colin and her lab at ESPCI Paris to create a set of pens with integrated pressure sensors built into their shafts. I will present some of the technical challenges in developing this tool, including issues related to study design, hardware design, data acquisition and analysis tools. I will also discuss preliminary findings from pilot data collected in our team's efforts to evaluate the validity of these pens as research tools in psycholinguistics and introduce plans for continued testing and opportunities for research with PASCAL.

2020-03-10

How Information Structures Within Caregiver-Infant Interactions Support Early Language Learning

Gedeon Deák

UC San Diego

+ more

Longstanding debates concerning the possible specialization of cognitive resources for primary language learning cannot be resolved without models of detectable information patterns (i.e., covariance statistics) in the language input available to infants. Ultimately the debate requires an empirical account of whatever language-predictive information patterns infants attend to, encode, and remember.
I will discuss this problem in general, and describe some research to sketch preliminary partial answers to these questions (in one language). Most results are from a longitudinal study of healthy English-learning infants and their mothers during the first year. Findings include: (1) moderate maternal longitudinal stability of language output, and modest relations to infants' later language competence; (2) sequential patterns in mothers' speech that could help infants partly predict the content of the next utterance; (3) non-random patterning of maternal speech content with infants' and mothers' concurrent manual and visual actions; (4) corpus analyses (by Lucas Chang) showing that age of acquisition of specific words is predicted by multiple levels of speech co-occurrence patterns, ranging from syntactic to discourse-level patterns. These results suggest that multiple sources of predictive information are available to scaffold infants' acquisition of words, phrases, and lexical relations. As time allows, I will describe ongoing efforts to expand our corpus, and to incorporate prosodic information as another channel of predictive patterning.

2020-03-03

Social robots and language technologies

Janet Wiles

School of Information Technology and Electrical Engineering, The University of Queensland, Australia (currently on sabbatical in Cogsci UCSD)

+ more

This talk will introduce research into social robots and language technologies at the ARC Centre of Excellence for the Dynamics of Language (CoEDL), a 7-year multi-university Australian collaborative research centre. I will present a series of case studies in technology design, based on the child-friendly robots in the ongoing Opal project (a sister project to UCSD’s RUBI project) and introduce some of our group’s analysis tools for interaction dynamics. Opie robots are primarily designed for physically-embodied social presence, enabling children and adults to touch, hold, or hug the robot, and to use its solid frame as a physical support. The key design issues start with safety of the users, which affects movement and speed; and safety of the robot from rough play by children. These capabilities enable different kinds of studies from those possible with commonly available commercial robots, which are too fragile for rough use or in danger of falling over, posing a risk of damage to themselves and a danger to children. The second design consideration for Opie robots is the speed of interaction, and how that can impact human social engagement, which occurs in timescales of hundreds of milliseconds. Case studies will include Opie robots as story tellers in public spaces, such as science museums and technology showcases; robots as language assistants in classrooms and language centres; Lingodroids that evolve their own languages; and chatbots used for surveys and conversation. Insights from the development of multi-lingual robots include the critical role of embedding robots in communities and the extended nature of the robots’ influence within and beyond a classroom. The Opie real-world case studies enable us to reflect on fundamental questions about design decisions that affect when a robot is considered to be a social being; whether a robot could understand the grounded meanings of the words it uses; and practical questions about what is needed for a robot to be an effective learning companion in individual and group settings. The talk will conclude with a brief overview of two computational tools for analysing human-human and human-robot interactions, including conceptual recurrence analysis of turn-taking in conversations using Discursis and the timing of interactions using Calpy’s pausecode.

2020-02-25

Ask Not What Bilingualism Does for Cognition, Ask What Cognition Does for Bilingualism!

Anat Prior

Faculty of Education, University of Haifa

+ more

The question of whether human language is a specialized skill (operating as a module) separate from other cognitive abilities has a long history in psychology. Recent studies suggest that bilinguals may rely on domain-general mechanisms of attention and goal maintenance to prevent interference between their two languages, ultimately leading bilinguals to have an improved ability to manage interference in a broad range of nonlinguistic tasks relative to monolinguals. These findings imply that language is not specialized and is not separate from other aspects of cognition. However, group comparisons between monolinguals and bilinguals are problematic, as there are many associated group differences besides language use (e.g. cultural background, immigration status, and others).
I will present three lines of research that adopt a different approach to investigating whether language is specialized. Instead of looking for cognitive advantages in bilinguals, these studies directly investigated how bilinguals might recruit general cognitive abilities in language processing. Study 1 did not find evidence that bilinguals recruit executive functions to manage language competition; in contrast, Study 2 demonstrated that bilinguals can use perceptual cues to facilitate language control, and Study 3 found that trilinguals can and do apply higher-order cognitive abilities to support their comprehension across languages. Taken together, the results of these studies do not support a strict specialization of language, but rather suggest complex and reciprocal language-cognition relations.

2020-02-18

Early language matters: Insights from Deaf with extremely late sign language onset

Qi Cheng

University of California, San Diego

+ more

Deaf individuals often have limited access to language during early life, and their delayed sign language onset can lead to later language deficits, especially at the morpho-syntactic level. Examining the effects of a lack of early language in this population can help us better understand the role of early language in typical first language development. In this talk, I will focus on the syntactic development of American Sign Language among Deaf individuals with extremely late sign language onset, combining observations from three levels: longitudinal development, sentence processing strategies, and brain language pathways. Study 1 presents a longitudinal study of 4 Deaf late signers on their word order development during early stages. Study 2 uses a sentence-picture verification experiment to examine whether Deaf late signers robustly rely on word order to comprehend simple Subject-Verb-Object sentences, even when the sentence meaning is implausible. Study 3 looks at the connectivity patterns of major language pathways in the brain using diffusion tensor imaging. Altogether, the findings from these studies suggest profound effects of early language deprivation on language development at the mono-clausal level.

*Sign language interpreters will be present*

2020-02-11

Is language control domain general?

Mathieu Declerck

Laboratory for Language and Cognitive Neuroscience, San Diego State University, San Diego, USA 
Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands

+ more

One of the major discussions in the bilingual literature revolves around whether language control, a process that makes sure that a bilingual uses the correct language even though both languages are typically activated, is domain general. In this talk, I will give an overview of the literature regarding domain-general language control, with a focus on studies that contrasted language switching, which allows for measures of language control, and task switching, which allows for measures of more domain-general non-linguistic control. I will also present the first ERP data comparing language and task switching.

2020-02-04

A predictive coding account of sociolinguistic stereotype formation

Lindy Comstock

UCLA

+ more

The practice of mixing languages (aka “code-switching”, Myers-Scotton, 1995) provides a unique opportunity to study sociolinguistic expectancy violations. Bilingual code-switching occurs most often among friends and family, representing a highly emotional expression of ingroup status. For the Mexican Latino (ML) heritage community in Los Angeles, refraining from code-switching with proficient Spanish speakers of this ethnic group may constitute a sociolinguistic expectancy violation, whereas monolinguals often censure code-switching, attributing a non-normative outgroup status to bilinguals who code-switch. Expectancy violation theory states there are evaluative consequences to violating social expectations (Burgoon, 1993, 1995). When others reproduce our expectations, we evaluate them positively; when others confound our expectations, we perceive them negatively, irrespective of any real offense (Burgoon, 1990; Lemak, 2012). Predictive coding theory (Friston, 2009; Huang & Rao, 2011; Rao & Ballard, 1999) suggests these evaluations may persist, blinding us at least temporarily to new data that contradict our past assessment of an individual. Our ability to perceive changes in behavior once our predictions have become entrenched is called into question. By first training participants on stereotypes, then introducing behavior contradictory to those stereotypes in a language proficiency rating task, our study assesses which individuals will internalize stereotypes and which will actively update their expectations, reducing the perception of a violation, and thus the severity of a rating penalty. The background characteristics of individuals who engaged in stereotyping behavior are analyzed and contrasted with those of individuals who displayed no stereotyping in the rating task.

2020-01-28

Perception precedes production in native Mandarin speakers of English

Madeleine Yu, Reina Mizrahi, & Sarah Creel

Department of Cognitive Science, UC San Diego

+ more

Nonnative accents are commonplace, but why? Ample research shows that perceptual representations of second-language speakers are shaped by their first language. But how is production affected? If perceptual representations perfectly control motor production, or if speech perception occurs in terms of speech articulatory movements as on direct-realist or common-coding theories, then second-language speakers should understand their own speech accurately. To test this, we recorded 48 native Mandarin speakers labeling pictures in English. We then played back their own recorded productions (e.g. “lock”) as they chose one of four pictures (lock, log, shape, ship). They also heard a paired native English speaker. Words contained contrasts challenging for Mandarin speakers, principally coda voicing (lock, log) and similar-vowel (shape, ship) pairs. Listeners achieved 89% accuracy on both their own productions and native speakers', suggesting good matching between perception and production. However, errors were unevenly distributed: Mandarin speakers heard their own voiced codas (log) as voiceless (lock) more often than the reverse (10% vs. 5%). This mirrors a similar but larger voiceless bias in native-English listeners hearing accented stimuli, suggesting that Mandarin speakers’ coda voicing perception is more nativelike than their production. Ongoing work suggests that, on the one hand, listeners are specialized to recognize their own productions over others' productions, but on the other hand, that their production lags behind their perceptual representations. Production lag seems inconsistent with a strong version of common-coding theory. It is more consistent with other models (DIVA) in which listeners form auditory representations which then serve as an error signal to update speech motor plans.

2020-01-21

Audiovisual Integration and Top-Down Influences in Speech Perception

Dr. Laura Getz

Assistant Professor, Department of Psychological Sciences, University of San Diego

+ more

Organizing complex perceptual inputs in real time is crucial for our ability to interact with the world around us, and information received in the auditory modality in particular is central to many fundamental aspects of human behavior (e.g., spoken language, music perception). While classic views of perception hold that we absorb environmental information from our senses and translate these inputs into signals that the brain organizes, identifies, and interprets in a bottom-up fashion, there is a long-standing debate as to the degree to which top-down effects from higher-level processes such as emotions, actions, motivation, intentions, and linguistic representations directly influence perceptual processing. 
In this talk, I will present an overview of two lines of my work focusing on the importance of interactions, including the interaction of bottom-up and top-down processing and interactions within and across sensory modalities. I will show how interaction effects are important to speech perception and language processing, looking at a computational modeling approach to understanding the development of audiovisual speech integration (cf. Getz, Nordeen, Vrabic, & Toscano, 2017, Brain Sciences) and a cognitive neuroscience approach to investigating top-down lexical influences on basic speech encoding using the event-related potential technique (cf. Getz & Toscano, 2019, Psych Science). Together, these two projects help answer longstanding questions in cognitive science regarding the synergy between various levels of perceptual and cognitive processing.

2020-01-14

An Empirical Study on Post-processing Methods for Word Embeddings

Shuai Tang

Department of Cognitive Science, UC San Diego

+ more

Word embeddings learnt from large corpora have been adopted in various applications in natural language processing and serve as general input representations for learning systems. Recently, a series of post-processing methods have been proposed to boost the performance of word embeddings on similarity comparison and analogy retrieval tasks, and some have been adapted to compose sentence representations. The general hypothesis behind these methods is that by enforcing the embedding space to be more isotropic, the similarity between words can be better expressed. We view these methods as an approach to shrink the covariance/gram matrix, which is estimated by learning word vectors, towards a scaled identity matrix. By optimising an objective in the semi-Riemannian manifold with Centralised Kernel Alignment (CKA), we are able to search for the optimal shrinkage parameter, and provide a post-processing method to smooth the spectrum of learnt word vectors which yields improved performance on downstream tasks.
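
As a rough, non-authoritative illustration of the shrinkage view described above, the following numpy sketch shrinks the covariance of a set of word vectors toward a scaled identity and maps the vectors accordingly. It is a simplification under assumed settings: the shrinkage parameter lam is fixed by hand here, rather than selected by optimising a Centralised Kernel Alignment objective as in the talk.

import numpy as np

def shrinkage_postprocess(E, lam=0.3):
    """Smooth the spectrum of word embeddings E (n_words x dim).

    lam = 0 leaves the vectors unchanged; lam = 1 fully whitens them,
    making the embedding space maximally isotropic.
    """
    E_c = E - E.mean(axis=0, keepdims=True)           # remove the common mean direction
    cov = np.cov(E_c, rowvar=False)                   # dim x dim covariance
    dim = cov.shape[0]
    target = (np.trace(cov) / dim) * np.eye(dim)      # scaled identity with the same total variance
    cov_shrunk = (1.0 - lam) * cov + lam * target     # shrink toward the scaled identity
    # Transform the vectors so their covariance matches the shrunk matrix:
    # E_c @ cov^{-1/2} @ cov_shrunk^{1/2}
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T
    vals_s, vecs_s = np.linalg.eigh(cov_shrunk)
    sqrt_s = vecs_s @ np.diag(np.sqrt(np.maximum(vals_s, 0.0))) @ vecs_s.T
    return E_c @ inv_sqrt @ sqrt_s

# Example: smooth 300-dimensional vectors for a 10,000-word vocabulary.
E = np.random.randn(10_000, 300)                      # stand-in for learnt embeddings
E_smoothed = shrinkage_postprocess(E, lam=0.3)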

2020-01-07

Bilingualism: The Mechanisms to Control Two Languages in One Mind

Chuchu Li

Department of Psychiatry, University of California, San Diego

+ more

Bilinguals seem to effortlessly control which language they speak, although studies show that both languages are activated even when only one language seems to be in use. Previous studies showed that bilinguals rely on inhibition to control activation of the language they don’t mean to speak, and to ensure successful communication. I will present a series of studies on bilingual language switching, that reveal how and when bilinguals rely on inhibitory control in speech production. Specifically, I consider how bilinguals manage activation of cognates, which may pose special challenges to bilingual control mechanisms, because of overlap in form (e.g., the Spanish word for lemon is limón). While cognates sometimes facilitate language switching, in other situations cognates magnify interference and make it more difficult for bilinguals to switch languages. Other aspects of the data including self-correction of errors reveal that when language selection is difficult bilinguals increase monitoring to enable eventual selection of the correct language, and that bilinguals can rely on context cues that enable language switches without reliance on inhibitory control. These findings illustrate an important but also varying role for inhibitory control in bilingual speech production. More broadly, this work provides threads to follow in the larger aim of understanding which aspects of language processing are specialized for language versus reliant on general cognitive abilities including attention, inhibition, and other executive functions.

2019-12-03

Dynamic changes in bilingual performance: Long-term and short-term factors

Tamar Degani

University of Haifa
Department of Communication Sciences & Disorders
[Visiting scholar UCSD Psychology]

+ more

Bilinguals frequently shift between their languages across communicative settings, as when taking a test in one language after having talked on the phone in the other. A brief exposure of a few minutes to one language has been shown to influence subsequent bilingual performance in the other language (e.g., Kreiner & Degani, 2015), but the scope and generalizability of these effects are unclear. In a series of experiments with Arabic-Hebrew, Hebrew-English and Russian-Hebrew bilinguals we examined (1) whether different types of brief exposures (production vs. comprehension) affect subsequent performance in the same way; (2) what aspects of long-term bilingual experience modulate these short-term effects, and (3) whether the effects are limited to repeated lexical items or are extended across the whole language system. The results show that production in one language is slower and more error prone post exposure to another language. Further, exposure that includes active production leads to stronger effects, and lexical retrieval is more susceptible to the effect of brief language exposure than morpho-syntactic processes. Interestingly, both the first and the second languages are influenced, but the patterns of long-term interactional context modulate the scope of the effect. In particular, for bilinguals who routinely shift between their languages in their daily lives, brief exposure effects are not limited to repeated items, but are extended to the whole-language system. Together, these findings show that bilinguals’ performance is dynamically modulated by both long-term patterns of language use and short-term changes in language activation.

2019-11-19

Orthotactic sensitivity vs. phonological constraints on word recognition:
An ERP study with deaf and hearing readers

Brittany Lee

Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego

+ more

A previous masked lexical decision study with hearing readers (Frankish and Turner, 2007) found that unpronounceable non-words yielded more false positives (i.e., were mistaken for words) compared to pronounceable non-words. Although orthography may play a role in these pronounceability effects, the authors opted for a phonological interpretation because dyslexic readers - with weak phonological decoding skills - showed no such effect. In this study we used event-related potentials to investigate pronounceability effects in skilled deaf and hearing readers. Twenty deaf readers and twenty hearing readers completed a masked lexical decision task. Non-words all contained an adjacent letter transposition and were rated as pronounceable or unpronounceable in a norming study. A lexicality effect was observed in both groups, with non-words eliciting a larger N400 than words. A stacked effect of pronounceability was observed in the hearing group, with pronounceable non-words eliciting the largest N400, followed by unpronounceable non-words, and then words. The deaf group showed a larger amplitude N400 to pronounceable non-words compared to both unpronounceable non-words and words, but no difference between unpronounceable non-words and words. These findings suggest differences in sensitivity to phonological constraints and possibly in the nature of orthographic representations for deaf and hearing readers.

2019-11-12

The learnability of syntactic generalizations from linguistic input: insights from deep learning

Roger Levy

MIT

+ more

What inductive bias is required to make syntactic generalizations from the input available to a learner is one of the central questions in the cognitive science of language, and a keystone problem for advancing natural language technology. Due to the past decade's remarkable progress in deep learning and associated technologies, today’s recurrent neural network and attention-based Transformer models can now be trained on a human childhood’s or lifetime’s worth of linguistic input. While these models are the state of the art in broad-coverage evaluations, our understanding of the syntactic generalizations they acquire remains limited. Here, we probe what structural generalizations these models implicitly acquire by treating them as “psycholinguistic subjects” in controlled experiments, ranging from tests for basic part-of-speech distinctions to syntactic hierarchy to incremental syntactic state and long-distance dependencies. Our experiments reveal both impressive successes, such as learning not only unbounded filler-gap dependencies but also the hierarchy- and “island”-based restrictions on those dependencies, and striking limitations, such as failure to compute phrasal agreement features for coordinate structures. Our work offers new, linguistically-informed evaluation criteria for AI models of language, and places theoretically informative lower bounds on what features of natural language syntax may be learnable using domain-general mechanisms from linguistic input alone.
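
To give a flavour of this "psycholinguistic subject" methodology, here is a small Python sketch that compares the surprisal an off-the-shelf pretrained language model assigns to a matched grammatical/ungrammatical pair. The model (Hugging Face's GPT-2) and the agreement items are illustrative assumptions, not the models or materials used in the work described above.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_surprisal(sentence):
    """Total surprisal (negative log-likelihood, in nats) of a sentence under the model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token; undo the averaging.
    return out.loss.item() * (ids.shape[1] - 1)

# Minimal subject-verb agreement probe (illustrative items only).
grammatical = "The keys to the cabinet are on the table."
ungrammatical = "The keys to the cabinet is on the table."
print(sentence_surprisal(grammatical) < sentence_surprisal(ungrammatical))  # expect True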

2019-11-05

Hearing Aids Research Instruments

Harinath Garudadri

Associate Research Scientist at the UC San Diego’s Qualcomm Institute of Calit2

+ more

In this talk, I present a wearable, open-source, real-time speech processing platform (OSP) for research on hearing healthcare and associated disorders. Users’ experience with modern hearing aids is disappointing in many respects, including poor quality in noisy environments. There are at least two challenges behind this: (i) understanding how the user’s hearing is compromised and (ii) pre-distorting the signal in under 10 ms, using limited battery, to compensate for the hearing loss. With support from NIH (R01, R21/R33), ARL, and NSF, we developed OSP (http://openspeechplatform.ucsd.edu/), which includes hardware based on smartphone chips, form-factor-accurate hearing aids, multi-channel EEG acquisition, and inertial motion sensors for physical activity assessment. The software includes an embedded web server hosting web apps for novel, multi-sensory, multi-modal psychophysical investigations beyond what is currently possible. My goal in giving this talk is to describe the features of this platform and explore collaborative research topics.

2019-10-22

Why Do Homophones Exist?

Sean Trott

Department of Cognitive Science, UC San Diego

+ more

Human languages are rife with ambiguity. This is clearly evident in homophony––where two or more words sound the same, but carry distinct meanings. For example, the wordform “bank” could refer to the financial institution or the land along a river. The existence of homophony (and ambiguity more generally) appears puzzling at first glance: why would a system evolved for efficient, effective communication contain properties such as ambiguity? Part of the answer resides in the remarkable capacity of human comprehenders to disambiguate. But some accounts posit that ambiguity actually makes language more efficient by recycling “easy” (short, frequent, phonotactically well-formed) wordforms for multiple meanings (Piantadosi et al, 2012). In this talk, I ask whether the finding that short, phonotactically well-formed words are more likely to be homophonous can be attributed to a direct pressure to recycle these wordforms, or whether homophony is an emergent consequence of how lexicons are structured.

2019-10-15

Decoding inner speech from intracranial recordings in the human brain

Stephanie Martin

Halıcıoğlu Data Science Institute

+ more

Certain brain disorders limit verbal communication despite patients being fully aware of what they want to say. To help them communicate, a few brain-computer interfaces have proven useful, but these have relied on indirect actions to convey information, such as performing mental tasks like imagining a rotating cube, doing mental arithmetic, or attempting movements. As an alternative, we explored the ability to directly infer intended speech from brain signals. During this presentation, I will present my PhD work, which aimed at decoding various continuous (e.g. acoustic) and discrete (e.g. words) features during inner speech, using electrocorticographic recordings in epileptic patients. I will also briefly discuss new projects that I plan to carry out during my postdoc at UCSD.

2019-10-08

Do we use verbal and visuospatial working memory in the comprehension of co-speech iconic gestures?

Seana Coulson

Department of Cognitive Science, UC San Diego

+ more

Speakers often move their hands and arms while they are talking to convey information about the referents of their discourse. While a number of researchers have explored the relationship between working memory resources and the production of these gestures, much less is known about the role of working memory in their comprehension. I'll discuss a series of behavioral and EEG experiments that address the importance of verbal and visuospatial working memory in the comprehension of co-speech iconic gestures. Come help me figure out where the hand waving begins.

2019-06-04

Anything can be elided if you know how: clausal ellipsis without identity

Till Poppels

UCSD

+ more

Some of the most puzzling aspects of natural language arise from the interaction between linguistic and non-linguistic factors. In this talk I will consider one of those puzzles, a linguistic construction known as "sluicing," which allows speakers to leave entire clauses unpronounced:

(1) Joe was murdered but we don't know how he was murdered.

The central questions in the ellipsis literature are (i) how the meaning of elided material (marked like this) is conveyed given that it is not pronounced, and (ii) what material can be elided in the first place. It is widely assumed that the grammar permits ellipsis only if the context provides an identical copy of the elided material, and, sure enough, violating this Identity Condition does appear to be problematic:

(2) # Joe was murdered but we don't know who murdered him.

Identity theories correctly predict (2) to be unacceptable (since murdered him is not identical to anything in the linguistic context), but I will argue that they do so for the wrong reasons. Drawing on evidence from a series of experiments, I show that sluicing is not, in fact, constrained by a strict Identity Condition and cannot be explained by considering the linguistic context alone. Instead, the elided material is inferred in a way that draws on both linguistic and non-linguistic knowledge, and even cases involving extreme mismatch can achieve high levels of acceptability in the right context. These observations about sluicing add to a growing body of evidence that the appearance of an identity constraint is epiphenomenal, and that ellipsis is best analyzed as a form of discourse reference instead.

2019-05-28

Coordinating on meaning in communication

Robert Hawkins

Stanford University

+ more

How do we manage to understand each other through sensory channels? Human languages are a powerful solution to this challenging coordination problem. They provide stable, shared expectations about how the words we say correspond to the beliefs and intentions in our heads. However, to handle an ever-changing environment where we constantly face new things to talk about and new partners to talk with, linguistic knowledge must be flexible: we give old words new meaning on the fly. In this talk, I will present work investigating the cognitive mechanisms that support this balance between stability and flexibility. First, I will introduce an overarching theoretical framework of communication as a meta-learning problem and propose a computational model that formalizes the problem of coordinating on meaning as hierarchical probabilistic inference. Community-level expectations provide a stable prior, and dynamics within an interaction are driven by partner-specific learning. Next, I will show how recent connections between this hierarchical Bayesian framework and continual learning in deep neural networks can be exploited to implement and evaluate a neural image-captioning agent that successfully adapts to human speakers in real time. Finally, I provide an empirical basis for further model development by quantitatively characterizing convention formation behavior in a new corpus of natural-language communication in the classic Tangrams task. By using techniques from natural language processing to examine the (syntactic) structure and (semantic) content of referring expressions, we find that pairs coordinate on equally efficient but increasingly idiosyncratic solutions to the problem of reference. Taken together, this line of work builds a computational foundation for a dynamic view of meaning in communication.

2019-05-21

What makes a language easier to learn?
Experimental tests of the Linguistic Niche Hypothesis

Arturs Semenuks

UCSD

+ more

Are all languages equally complex? Research in typological linguistics suggests that languages differ in terms of their morphological complexity. Furthermore, it is often argued that languages with more L2 speakers tend to be morphologically simpler, suggesting that L2 speakers in some way cause the languages to become simpler. However, the evidence for this comes mostly from qualitative and quantitative analyses of typological and diachronic data, as well as computational modelling. More in-lab experimental work is necessary if we want to (i) make causal claims about the influence of non-native speakers on language structure and (ii) fully understand the mechanism of that influence, i.e. how and why exactly the presence of L2 speakers in a population leads to language simplification down the line.

The Linguistic Niche Hypothesis argues that the simplification process is caused primarily by, in the words of Peter Trudgill, “the lousy language learning abilities” of adults and that languages change to become more learnable for L2 speakers. In the talk I will present results from a series of experiments testing common assumptions of the mechanisms aiming to explain the morphological simplification process during L2 contact. In experiment 1, we test the assumption that imperfect learning leads to language simplification. Using an iterated artificial language learning setup, we find that imperfect learning does escalate the erosion of a complex, communicatively redundant feature in the language. In experiments 2-4, we test the assumption that descriptively simpler languages, specifically languages with more transparent form-to-meaning mappings, are also more learnable. Surprisingly, overall we don’t find evidence for that being the case, except for when the participants’ L1 morphological structure is also highly transparent. I examine the seeming tension between the results of the experiments, argue that descriptively simpler languages are not always easier to learn and conjecture when they are, and discuss what the results suggest for the previously proposed mechanisms of language simplification.

2019-05-14

Lexical and sublexical factors that influence sign production:
Evidence from a large scale ASL picture-naming study

Zed Sevcikova Sehyr

San Diego State University

+ more

The mental lexicon exhibits structure that reflects linguistic patterns and affects language processing. Understanding and documenting these structural patterns is key to answering central linguistic and psycholinguistic questions. Picture-naming tasks have provided critical data for theories of language representation and production (Levelt, Roelofs, & Meyer, 1999), and picture-naming has been performed successfully with sign languages (Baus, Gutierrez-Sigut, Quer, & Carreiras, 2008; Emmorey, Petrich, & Gollan, 2013). However, large normative picture-naming databases suitable for use with sign languages are lacking. Moreover, the specific influences of lexical and sublexical factors on sign processing remain largely unexplored. Previous picture-naming studies with sign languages revealed effects of subjective frequency (Emmorey et al., 2013), but phonological complexity has not been found to influence naming times (Vinson, Thompson, Skinner, & Vigliocco, 2015). Sign iconicity may facilitate naming times, but only for late-learned signs (Vinson et al., 2015). However, it remains unclear how lexical or sublexical properties of signs influence naming latencies across a large set of signs and how these variables interact with each other. The aims of this study were 1) to determine the effects of lexical and phonological properties of signs (e.g., lexical class, frequency, phonological density, sign handedness, and iconicity) on picture naming times in ASL, 2) to compare our data with spoken language picture-naming databases (Bates et al., 2003; Snodgrass & Vanderwart, 1980; Szekely et al., 2003), and 3) to establish a normative database of pictures that correspond to specific signs that can be used by researchers and educators. Twenty-one deaf ASL signers named 524 black and white line drawings from Bates et al. (2003). Of these, 252 images depicted actions and 272 depicted objects. A total of 10,856 trials were recorded. Naming accuracy was 83%. For action naming, accuracy was significantly lower and RTs were longer (77%; 1247 msec) than for object naming (88%, 910 msec), which parallels the pattern found for spoken languages. Pictures depicting actions yielded greater diversity of responses than pictures depicting objects (H stat = .59 and .35, respectively). Pictures with a larger number of alternative names elicited longer RTs, pointing to lexical competition effects during sign production. Further, regression analyses examining the effects of lexical frequency, phonological properties, and iconicity on naming times revealed that higher lexical frequency led to faster RTs, and both iconicity and neighborhood density each predicted a small amount of variance in naming RTs. Two-handed productions resulted in longer RTs than one-handed productions, further suggesting that phonological complexity impacts sign retrieval. A standardized set of pictures together with the ASL normative data will be available online via an interactive database. In future work, the pictures and naming data could be used to create an ASL vocabulary assessment test for use with children or adults.
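As a purely hypothetical illustration of the kind of item-level regression analysis described above (not the study's code or data), naming latencies can be modeled as a function of lexical and sublexical predictors; all column names and values below are invented.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical item-level data: naming latency plus lexical/sublexical predictors.
items = pd.DataFrame({
    "rt":         [812, 950, 1020, 760, 1105, 890, 970, 840],
    "log_freq":   [3.1, 2.2, 1.8, 3.5, 1.2, 2.9, 2.0, 3.0],
    "iconicity":  [4.0, 2.5, 1.5, 5.2, 3.0, 2.0, 1.8, 4.4],
    "density":    [6, 3, 2, 8, 1, 5, 2, 7],      # phonological neighborhood size
    "two_handed": [0, 1, 1, 0, 1, 0, 1, 0],      # 1 = two-handed sign
})

# Ordinary least squares: RT as a function of the predictors discussed above.
model = smf.ols("rt ~ log_freq + iconicity + density + two_handed", data=items).fit()
print(model.summary())  # a negative log_freq slope = faster naming for frequent signs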

2019-05-07

Electrophysiological Studies of Visual and Auditory Processing in Deaf Children with Cochlear Implants

David Corina

UC Davis

+ more

Deaf children who receive a cochlear implant early in life and engage in intensive oral/aural therapy often make great strides in spoken language acquisition. However, despite clinicians’ best efforts, there is a great deal of variability in language outcomes. One concern is that cortical regions which normally support auditory processing may become reorganized for visual function, leaving fewer available resources for auditory language acquisition. In this talk I will present preliminary data from an on-going longitudinal study that uses a novel EEG paradigm to measure the development of auditory, visual and linguistic abilities in children with cochlear implants. I discuss the claims of maladaptive cross-modal plasticity as a limiting factor in auditory language development.

2019-04-30

Acquiring and predicting structure via statistical learning

Jenny Saffran

University of Wisconsin-Madison

+ more

Infant learners are sensitive to myriad statistical patterns in their environment. These regularities facilitate the acquisition of a range of representations and structures. They also facilitate the generation of expectations and predictions about the world. In this talk, I will describe a diverse array of infant studies, primarily focused on language, that examine the role of prior learning in the generation of expectations in downstream processing. Implications for atypical development will also be considered.

2019-04-23

The Aboutness of Language: A Critique of Berwick and Chomsky

Michael Arbib

Adjunct Professor of Psychology, UCSD
University Professor Emeritus, USC

+ more

Alas, the book Why Only Us: Language and Evolution by Berwick and Chomsky (2015) is marred by unwarranted assumptions, faulty logic and snide rhetoric -- and its answer to "Why Only Us" reduces to "Because we had a mutation that no one else had," which is hardly satisfying.

I offer an alternative framework based on the key notion of Aboutness: Language evolved for communication about the actual and imagined worlds. Since I have talked to CRL on this before, I will mention comparative neuroprimatology only briefly, to note that the brains and behaviors of great apes and monkeys provide clues to "why us?" (Arbib et al., 2018). The "only" is a separate issue.

 

References:

Arbib, M. A., Aboitiz, F., Burkart, J., Corballis, M., Coudé, G., Hecht, E., . . . Wilson, B. (2018). The Comparative Neuroprimatology 2018 (CNP-2018) Road Map for Research on How the Brain Got Language. Interaction Studies, 19(1-2), 370-387.

Berwick, R. C., & Chomsky, N. (2015). Why Only Us: Language and Evolution. Cambridge, M.A.: The MIT Press.

2019-04-16

Similarity as a tool for understanding the word production system

Bonnie Nozari

Johns Hopkins University

+ more

Despite their differences, nearly all models of word production agree that related representations are activated during the process of producing a word. Much less agreed upon are the consequences of such co-activation. Specifically, while semantic similarity is thought to primarily interfere with production, phonological similarity is thought to facilitate production. In this talk I will argue that such a position is equivalent to assuming fundamentally different principles for the two stages of word production (lexical retrieval and phonological encoding). I will then provide converging evidence from neurotypical speakers and individuals with aphasia against this position. Finally, I will discuss the implications of some recent EEG data for understanding the cognitive processes underlying the co-activation of semantically- and phonologically-related items during word production.

2019-04-09

A Bayesian framework for unifying data cleaning, source separation and imaging of electroencephalographic signals

Alejandro Ojeda

UCSD

+ more

Electroencephalographic (EEG) source imaging depends upon sophisticated signal processing algorithms for data cleaning, source separation, and localization. Typically, these problems are addressed by independent heuristics, limiting the use of EEG source imaging in a variety of applications. Here, we propose a unifying parametric empirical Bayes framework in which these dissimilar problems can be solved using a single algorithm (PEB+). We use sparsity constraints to adaptively segregate brain sources into maximally independent components with known anatomical support, while minimally overlapping artifactual activity. Of theoretical relevance, we demonstrate the connections between Infomax ICA and our framework. On real data, we show that PEB+ outperforms Infomax for source separation on short time-scales and, unlike the popular ASR algorithm, it can reduce artifacts without significantly distorting clean epochs. Finally, we analyze mobile brain/body imaging data to characterize the brain dynamics supporting heading computation during full-body rotations, replicating the main findings of previous experimental literature.

2019-04-02

Two ways of looking at fingerspelled borrowings in American Sign Language

Ryan Lepic

University of Chicago

+ more

American Sign Language is in considerable contact with English. One outcome of this contact is that ASL signers use fingerspelling to borrow English words into ASL. Previous studies have demonstrated the important role that fingerspelling plays in ASL; however, fewer studies have examined the distribution or function of fingerspelled words in ASL usage. The present study describes two related datasets for studying the use of ASL fingerspelling. The first dataset is a list of 1,050 ASL translations of English compounds, collected through a translation elicitation task. The second dataset is a corpus of 893 fingerspelled words collected from approximately one hour of an ASL news show. These complementary datasets provide a way to study the types of concepts that are typically fingerspelled in ASL, in addition to the variation in the use of fingerspelling in ASL.

2019-03-12

Relativization strategies in Turkish Sign Language: understanding the varying strategies

Okan Kubus

+ more

The main focus of this talk is the recent investigations on relative clause constructions (RCCs) in Turkish Sign Language (Türk İşaret Dili - TİD). I first introduce relativization strategies that have been identified in different sign languages in the literature. Next, I discuss RCCs in TİD. In particular, I present several relativization strategies and their frequency of distribution in a small-scale corpus consisting of narrative data (Kubus 2016). Signers of TİD prefer both internally headed and externally headed relativization strategies. They use non-manuals (i.e. facial expressions including squinting, brow raise, or a slight headshake) as well as manual signs (i.e. a clause-final INDEX sign, AYNI ‘same’, or different combinations of them) for marking relative clauses in TİD. I also discuss the dynamics of the variation of these markers with respect to local and global aspects of discourse structure, i.e. the position of head nouns (Kubus & Nuhbalaoğlu, 2018a) and the information status of head nouns (Kubus & Nuhbalaoğlu, 2018b). The ongoing empirical investigation of relativization strategies in TİD at different levels suggests that some markers of RCCs in TİD are in the process of grammaticalization. Finally, I will also discuss various challenges in investigating relative clause constructions in sign languages.

Kubus, O. (2016). Relative clause constructions in Turkish Sign Language. Ph.D. dissertation, University of Hamburg. http://ediss.sub.uni-hamburg.de.
Kubus, O. & D. Nuhbalaoğlu (2018a). Position of the head noun makes a difference: varying marking of relative clauses in Turkish Sign Language. Sign CAFÉ 1, University of Birmingham, Birmingham, July 30th-31st 2018.
Kubus, O. & D. Nuhbalaoğlu (2018b). The challenge of marking relative clauses in Turkish Sign Language. Dilbilim Araştırmaları Dergisi [eng.: Linguistic Research Journal] 29(1), 139–160. doi: 10.18492/dad.373454.

2019-03-05

How to find the rabbit in the big(ger) box:
Reasoning about contextual parameters for gradable adjectives under embedding

Helen Aparicio

MIT

+ more

Haddock (1987) noticed that the definite description ‘the rabbit in the hat’ succeeds in referring even in the presence of multiple hats, so long as only one hat contains a rabbit. These complex definites suggest that uniqueness with respect to the NP hat is not required in such embedded contexts, raising the question of what the correct formulation of the uniqueness condition for definite determiners is. Generally speaking, two types of solutions have been proposed to this puzzle. The first one postulates a complex semantic representation for definite determiners, where uniqueness can be checked at different points of the semantic representation for either sets of hats or sets of rabbit-containing hats (Bumford 2017). The second type of account proposes that definite descriptions can be evaluated against a sub-portion of the maximally available context (Evans 2005; Frazier 2008; Muhlstein 2015). This pragmatic mechanism ensures that reference resolution is successful, even when the maximal context would violate the uniqueness presupposition of the definite article.

The present work seeks to tease apart these two classes of theories by investigating the interpretive preferences for similarly embedded noun phrases containing a positive or comparative adjective (e.g., ‘the rabbit in the big/ger box’). Experimental results show that embedded positive adjectives exhibit a sensitivity to contextual manipulations that embedded comparatives lack. We derive this sensitivity using a probabilistic computational model of the contextual parameters guiding the interpretation of the embedded NP, and compare it to alternative models that vary in the lexical representations assumed for definite determiners. Our simulation results show that neither of the two proposals under consideration can independently account for all of the observed experimental results. We show that the model that best matches human data is one that combines a complex uniqueness check (à la Bumford) with pragmatic context coordination.

2019-02-26

Is there evidence for word form prediction during sentence comprehension?

Thomas Urbach

UCSD

+ more

In this talk, I'll provide some background on this question about prediction, why it is being revisited now, and present some new findings from current work in progress using a novel analytic approach. First, I'll outline why the question is important and summarize two experiments that used similar designs and measurements of event-related brain potentials (ERPs) but came to different conclusions: Yes (DeLong, Urbach, and Kutas 2005) and No (Nieuwland et al 2018). I'll then present an overview of regression-ERPs (rERPs), a relatively new EEG modeling framework articulated and motivated in Smith and Kutas (2014), and demonstrate how it can overcome limitations of previous efforts to answer the question at hand by analyzing, de novo, EEG data from the 2005 report and two closely related (2004-5, 2010) studies. I will compare models of these continuous single-trial EEG data as linear mixed-effects rERPs with different random effects structures. These analyses model the word-form prediction effect in the EEG data as a continuous variable evolving over time. The patterns of rERPs indicate that the prediction effects do emerge when hypothesized, and not before. These findings provide a novel and temporally fine-grained Yes answer. I'll finish by stepping away from this particular question to discuss, more generally, how the regression ERP framework presents new solutions and raises new problems for using electrophysiological measures to answer questions of interest to language researchers.
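For readers new to the rERP idea, the sketch below shows its simplest form: at every time sample, single-trial EEG amplitude is regressed on trial-level predictors, and the coefficient time courses are the rERP waveforms. This uses simulated data and ordinary least squares purely for illustration; the analyses described above use linear mixed-effects models on real EEG.

import numpy as np

# Minimal sketch of the regression-ERP idea (after Smith & Kutas, 2014):
# regress single-trial EEG amplitude on trial-level predictors at each time
# sample; the resulting coefficient time courses are the rERP waveforms.
# The data below are simulated for illustration only.

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 300                   # e.g., 300 samples per epoch
cloze = rng.uniform(0, 1, n_trials)              # predictor: cloze probability
X = np.column_stack([np.ones(n_trials), cloze])  # intercept + cloze

# Simulated EEG: a cloze-dependent deflection between samples 150-250 plus noise.
eeg = rng.normal(0, 5, (n_trials, n_samples))
eeg[:, 150:250] += np.outer(cloze, np.hanning(100) * 4)

betas, *_ = np.linalg.lstsq(X, eeg, rcond=None)  # solve all per-sample regressions at once
intercept_rerp, cloze_rerp = betas               # each row is a waveform over samples
print(cloze_rerp[:5], cloze_rerp[200])           # cloze effect emerges only in the later window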

2019-02-19

Memory, Locality, and Word Order Universals

Richard Futrell

UC Irvine

+ more

I explore the hypothesis that word order universals of human languages can be explained in terms of efficient communication that minimizes language processing costs. First, I show corpus evidence from 54 languages that word order in grammar and usage is shaped by working memory constraints in the form of dependency locality: a pressure for syntactically linked words to be close to one another in linear order, which can explain Greenberg’s harmonic word order universals and also certain exceptions to them. Second, I present a general information-theoretic model of working memory cost in language processing, in which comprehenders can strategically invest working memory resources for the benefit of being able to predict upcoming words more accurately. The resulting model favors languages that exhibit information locality, meaning that predictive information about a word is concentrated in the recent past before the word. I show corpus evidence that word order grammars have this property, and show that dependency locality can be derived as a special case of the more general information locality.
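A minimal sketch of the dependency-length measure behind dependency locality, using a hand-made toy parse rather than the corpus data from the talk: the cost assigned to a word order is the summed linear distance between each word and its syntactic head.

# Toy illustration of total dependency length; the parse below is invented.

def total_dependency_length(heads):
    """heads[i] is the index of word i's head, or None for the root."""
    return sum(abs(i - h) for i, h in enumerate(heads) if h is not None)

# "the dog chased the cat": the->dog, dog->chased, chased=root, the->cat, cat->chased
heads = [1, 2, None, 4, 2]
print(total_dependency_length(heads))  # 1 + 1 + 1 + 2 = 5

# A reordering that moves dependents away from their heads raises this total,
# which is the processing pressure that dependency locality describes.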

2019-02-05

Responsivity in Early Parent-Child Conversations

Silvia Nieva

Complutense University of Madrid

+ more

Previous studies have highlighted the relevance of spontaneous verbal imitations in language development (Cheng et al., 2018; Schwab et al., 2018; Snow, 1981; Masur, 1995; Masur & Eichorst, 2002). The present study is a longitudinal analysis of the spontaneous verbal repetitions of 18 children from Spain, ages 1;9 to 2;5 years, and their parents, during natural parent-child interactions. Interactions between two variables (age and participant) were examined, as well as the type of repetition (exact, expanded and reduced). The results revealed age-related changes and interactions between participant (child or adult), age, and the type of repetition. Methodological issues and the clinical relevance of the results will be discussed.

2019-01-29

The pragmatics of depiction

Kathryn Davidson

Harvard University

+ more

The use of descriptive verbal content is known to be governed by pragmatic principles that expect increased informativity for increased effort; in contrast, depictive content remains a largely unexplored pragmatic domain, despite a recent surge of interest in how depictive content (signed, spoken, and gestured) composes in natural language semantics. In this talk I’ll focus primarily on one type of depictive content (English co-speech gestures) in a series of three experimental studies that dissociate triviality (duplicating content) from informativity (whether the content is crucial to the task). One goal is to understand the wide variability in acceptability of co-speech depictive gestures in experimental linguistic studies; another is a more general understanding of how depiction (in sign, speech, and gesture) is integrated within a system of pragmatic communicative principles. I will end with connections to the linguistic structure of bimodal bilingualism based on how users with access to language in two full modalities make use of depictive content.

2019-01-22

Construction in General, and Music and Language in Particular

Michael Arbib

Adjunct Professor of Psychology, UCSD
University Professor Emeritus, USC

+ more

Preview of a talk for the Spring School 2019, Language and Music in Cognition: Integrated Approaches to Cognitive Systems, February 2-8, 2019, University of Cologne, Cologne, Germany

My last presentation to the UCSD CRL Seminar (November 14, 2017) was a preview of “How the Brain Got Language: The Comparative Neuroprimatology 2018 (CNP-2018) Road Map” (Arbib et al., 2018). This talk has minor overlap, linking language & music within a broader framework of “construction.” The music theme will build on key ideas from Language, Music and the Brain: A Mysterious Relationship (Arbib, 2013b). A theme uniting both projects was that neither language nor music should be considered as purely vocal systems -- consider language and gesture or music and dance, for example. Another theme extends Bjorn Merker’s view (Merker, 2015) of both music and language as being “Humboldtian systems,” that is, systems in which unlimited pattern diversity is generated through combinations among a finite (typically small) set of non-blending elements (Von Humboldt, 1836). Developing a view of language rooted in Construction Grammar, a key issue will be to reassess ideas for what constitutes syntax and semantics in music. A complementary idea, inspired by recent work linking architecture and neuroscience (Arbib, 2013a), will be to generalize Humboldtian systems to offer a framework that begins to extend the notion of construction in language to the human capacity for construction more generally by removing the restriction “typically small” from Merker’s definition of a Humboldtian system (Arbib, 2019).

2019-01-15

Unifying parsing and generation in filler-gap dependency formation

Shota Momma

UCSD

+ more

In this talk, I will advance the view that the same syntactic structure building mechanism is shared between parsing (in comprehension) and generation (in production), specifically focusing on filler-gap dependencies. Based on both existing and novel experimental data, I will argue that both comprehenders and speakers anticipatorily represent (i.e., predict and plan) the gap structure soon after they represent the filler and before representing the words and structures that intervene between the filler and the gap. I will discuss the basic properties of the algorithm for establishing filler-gap dependencies that we hypothesize to be shared between comprehension and production, and suggest that it resembles the derivational steps for establishing long-distance dependencies in a well-established grammatical formalism known as Tree Adjoining Grammar.

2019-01-08

The Effect of Representational Richness on Memory Retrieval during Referential Processing

Hossein Karimi

Penn State

+ more

Language processing necessarily relies on memory of the immediate past; previously encoded information needs to be retrieved to incorporate new information successfully and efficiently. One of the areas in which the role of memory retrieval is prominent is referential processing, where one or more referential candidates are initially encoded and then subsequently retrieved when a referring expression (such as a pronoun) is encountered. In this talk, I will report the results of four studies investigating whether and how modification (i.e., representational richness) of referents affects their subsequent retrieval. In the first two studies, we varied the amount of extra information attached to potential referents, producing representationally rich (e.g., the actor who had recently won an Oscar award) and bare referential candidates (e.g., the actor). We then measured the choice between different forms of referring expressions (e.g., pronouns vs. repeated nouns) during language production (study 1), and looking probability in the Visual World (study 2), and observed that representational richness facilitates the retrieval of associated memory items. In the third study, we examined the effect of modifier position on subsequent retrieval (pre-modified NPs: The cruel king vs. post-modified NPs: The king who was cruel) and found that post-modifiers result in greater retrieval facilitation relative to pre-modifiers, suggesting that memory encoding is more efficient when the head noun is encoded before the modifying information. In the fourth study, we showed that the representational richness effect could arise with the sheer passage of time, namely, without any modifying information: when the processor merely “stays with” a representation for some time, the encoding becomes more robust and retrieval is facilitated. These results have important implications for memory-based as well as functional theories of language processing.

2018-12-04

Mentalizing predicts perspective-taking during indirect request comprehension

Sean Trott

UCSD

+ more

People often speak ambiguously, as in the case of indirect requests. Certain indirect requests are conventional and thus more straightforward to interpret, such as “Can you turn on the heater?”, but others require substantial additional inference, such as “It’s cold in here”. How do comprehenders make inferences about a speaker’s intentions? And how do comprehenders vary in the information they draw on to make these inferences?

Here, we explore the hypothesis that comprehenders infer a speaker's intentions by mentalizing––sampling what the speaker knows or believes about the world, and deploying this information to adjudicate between competing interpretations of the same utterance (e.g. "It's cold in here"). In Experiment 1, we find that a speaker's inferable knowledge state influences pragmatic interpretation, but that the extent to which it does so varies across comprehenders. In Experiments 2-3, we find that individual differences in mentalizing, as measured by the Short Story Task (Dodell-Feder et al, 2013), explain some variability in participants' likelihood to integrate a speaker's perspective into their pragmatic interpretation. In Experiments 4-5, we ask whether this variance arises because of information loss during sampling (e.g. mental state inference), deployment (e.g. using mental state information for downstream pragmatic inference), or both.

2018-11-29

A common representation of serial position in language and memory

Simon Fischer-Baum

Rice University

+ more

Speaking, spelling, and serial recall require a sequence of items to be produced one at a time in the correct order. The representations that underlie this ability, at a minimum, contain information about the identity and position of the items in the sequence. I will present a series of studies that investigate whether similar principles underlie how position is represented in different cognitive domains – spelling, speaking, and verbal short-term memory. In each domain, a variety of hypotheses, or position representation schemes, have been proposed for how the position of an item is represented. Careful analysis of the patterns of errors produced by neuropsychological case studies and unimpaired adults in a range of tasks supports a common system for representing item position, with sequence edges playing a key role in encoding item position. Specifically, our analyses have found that each item’s position is represented both by its distance from the start of the sequence and its distance from the end of the sequence. The same scheme was supported over a wide range of alternative hypotheses when investigating the spelling errors produced by individuals with acquired dysgraphia, the phoneme substitution errors produced by individuals with aphasia, and the protrusion errors produced during immediate serial recall for both hearing and deaf participants. The fact that a similar scheme is used to represent position across a range of cognitive domains suggests that serial order processing may rely on some domain-general representational principles.
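A minimal sketch of the "both edges" position scheme described above, with illustrative helper names of my own: each item's position is coded by its distance from the start and from the end of the sequence, and positions in different sequences count as similar whenever either distance matches, which is the kind of overlap used to predict where serial-order errors migrate.

# Toy illustration of edge-based position coding; details are not the authors' model.

def edge_code(index, length):
    """Return (distance from start, distance from end) for a 0-based index."""
    return (index, length - 1 - index)

def position_overlap(code_a, code_b):
    """Count shared edge distances between two position codes."""
    return sum(a == b for a, b in zip(code_a, code_b))

# 3rd letter of a 5-letter word vs. 3rd and 5th letters of a 7-letter word
print(edge_code(2, 5))                                      # (2, 2)
print(position_overlap(edge_code(2, 5), edge_code(2, 7)))   # 1: shares distance from start
print(position_overlap(edge_code(2, 5), edge_code(4, 7)))   # 1: shares distance from end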

2018-11-27

Gesture, Language, and Thought

Sotaro Kita

University of Warwick

+ more

This presentation concerns a theory on how gestures (accompanying speaking and silent thinking) are generated and how gestures facilitate the gesturer's own cognitive processes. I will present evidence that gestures are generated from a general-purpose Action Generator, which also generates “practical” actions such as grasping a cup to drink, and that the Action Generator generates gestural representation in close coordination with the speech production process (Kita & Ozyurek, 2003, Journal of Memory and Language). I will also present evidence that gestures facilitate thinking and speaking through four functions: gesture activates, manipulates, packages and explores spatio-motoric representations (Kita, Chu, & Alibali, 2017, Psychological Review). Further, I will argue that the schematic nature of gestural representation plays a crucial role in these four functions. To summarise, gesture, generated at the interface of action and language, shapes the way we think and speak.

2018-11-20

Hearing aids research instruments

Harinath Garudadri

UCSD

+ more

My work since coming to UCSD 5 years ago has been exclusively on technologies for healthcare; this talk is specifically on improving hearing healthcare. A lot of academic research on hearing loss (HL) assessment and intervention is carried out with proprietary hardware and software (black boxes) from hearing aid (HA) manufacturers. Supported by an NIH/NIDCD R01 grant, we have developed a real-time, wearable, open-source speech-processing platform (OSP) http://openspeechplatform.ucsd.edu/ that can be used by researchers to investigate advanced HA algorithms in lab and field studies. The wearable version of OSP is based on smartphone chipsets and it is about 1/3rd the volume and weight of a smartphone. In addition to the basic and some advanced HA features, OSP provides multi-channel, high-resolution EEG acquisition, synchronously with audio. It includes an embedded web server, so that users can monitor and control the HA state; conduct audiological studies such as AB comparisons; collect self-appraisal data for ecological momentary assessments (EMA), etc., all from any browser-enabled device. My goal in giving this talk is to describe features of this platform and explore collaborative research topics.

Bio: Harinath Garudadri is an Associate Research Scientist at the UC San Diego’s Qualcomm Institute of Calit2. He joined UCSD after 26 years in the industry, including 16 years at Qualcomm. He has a PhD in electrical engineering (1988) from University of British Columbia, Vancouver, B.C. where he spent half his time in ECE and the other half in School of Audiology and Speech Sciences, Faculty of Medicine. His industry contributions have been incorporated into cell phones and commercial networks. Hari has 42 granted patents and over 18 pending patents in biomedical signal processing and related areas. Hari has over 43 peer-reviewed publications. His contributions were incorporated in 14 international standards specifications.

2018-11-13

Assessing the contribution of lexical quality variables to reading comprehension
in deaf and hearing readers

Zed Sevcikova Sehyr

San Diego State University

+ more

Reading builds on a spoken language foundation, and the quality of phonological forms plays a central role in reading development for hearing readers. Skilled deaf readers, however, are likely to have relatively coarse-grained phonological codes compared to hearing readers due to reduced access to spoken language. In what ways does the reading system successfully adapt for adult deaf readers? The Lexical Quality Hypothesis proposes that variation in the quality of word representations has consequences for reading comprehension (Perfetti, 2007). We evaluated the relative contribution of the lexical quality (LQ) variables – orthographic (spelling), phonological, and semantic (vocabulary) knowledge – to reading comprehension in adult deaf ASL signers and hearing nonsigners, using regression models to predict reading skill (PIAT and Woodcock Johnson (WJ) comprehension subtests). We hypothesized that the primary predictor of reading comprehension for deaf readers lies in the quality of orthographic representations and robust orthographic-to-semantic mappings, whereas the quality of phonological representations would be strongly predictive of reading comprehension for hearing readers. The preliminary results revealed that for deaf readers, LQ variables predicted 28% of the variance in PIAT scores (after eliminating covariates such as nonverbal reasoning skills) and 18% of the variance in WJ scores. Semantics and orthography, not phonology, predicted reading comprehension for deaf readers. For hearing readers, LQ variables predicted 14% of variance in PIAT scores and 56% in WJ scores. Phonology was the strongest predictor of reading comprehension (with semantics also predicting WJ scores). We conclude that 1) strong orthographic and semantic representations, rather than precise phonological representations, predict reading skill in deaf adults and 2) the predictive strength of LQ variables may depend upon how reading comprehension is measured.

2018-11-06

Are you committed or just interested?
Degrees of prediction in the lexical and syntactic domains

Aya Meltzer-Asscher

Tel Aviv University

+ more

Most theories agree that predictive processing plays an important role in sentence comprehension. However, it is possible that not all predictions have the same consequences. In this talk, I will provide behavioral and electrophysiological evidence for different degrees of commitment to prediction; while some predictions merely facilitate processing of the predicted content upon its occurrence, others entail proactive, predictive structure building and/or semantic interpretation. Accordingly, failed predictions are also processed differently in the two cases. I will show that the degree of predictive processing depends on the sentence context (both semantic and syntactic), as well as on the comprehender.

2018-10-30

Brain dynamics supporting lexical retrieval in language production

Stéphanie K. Riès

San Diego State University

+ more

Lexical retrieval is the process by which we activate and select lexical representations as we speak. Several brain regions have been proposed to be engaged in lexical retrieval, including subregions of the left prefrontal cortex (LPFC) and left temporal cortex (LTC). However, the precise role of these brain regions in lexical activation and selection and how they interact to support lexical retrieval are largely unknown. I will present results from intracranial electrophysiological studies and neuropsychological studies beginning to shed light on these issues. These results support the hypotheses that the posterior LTC can support both lexical activation and selection, and that the left PFC becomes necessary for lexical selection when interference between semantically-related alternatives is increased. Evidence accumulation modelling of reaction time distributions suggests that different computational mechanisms are affected when patients with LPFC versus LTC stroke-induced lesions perform lexical retrieval. In particular, lesions to the LTC impact the adjustment of both the rate of evidence accumulation and decision threshold, whereas lesions to the LPFC impact decision threshold adjustment only. Finally, using a graph inference method with iEEG data acquired during picture naming, we show that the left middle frontal gyrus is functionally connected to the left inferior temporal gyrus and ventral temporal cortex and that the nature of the connectivity between these regions changes depending on semantic context and repetition. In particular, connectivity patterns between these subregions of the LPFC and pLTC are overall denser (more electrode pairs connected) in situations of increased semantic interference and become sparser with increasing repetitions.

2018-10-23

The effect of emotional prosody on word recognition

Seung Kyung Kim

UCSD

+ more

The focus of work on phonetic variation in spoken language processing has been mostly on the mapping of the variable signal to sounds and words, with much less focus on the role of phonetically cued social/talker variation. In this talk, I present cross-modal priming studies investigating the effect of phonetically cued emotional information (i.e., emotional prosody) on spoken word recognition. The results show two main ways emotional prosody can influence the spoken word recognition process. First, the content of emotional prosody (such as anger or happiness) can directly activate corresponding emotional lexical items, independent of a lexical carrier. Second, emotional prosody can modulate semantic spreading of the lexical carrier. The pattern of semantic spreading is not due to the content of emotional prosody per se, but probably due to increased attention allocated to emotional prosody. This work offers a new approach to spoken word processing that embraces the social (broadly construed) nature of spoken language processing.

2018-10-16

Origins and functions of music in infancy

Samuel Mehr

Harvard University

+ more

In 1871, Darwin wrote, “As neither the enjoyment nor the capacity of producing musical notes are faculties of the least use to man in reference to his daily habits of life, they must be ranked among the most mysterious with which he is endowed.” Infants and parents engage their mysterious musical faculties eagerly, frequently, across most societies, and for most of history. Why should this be? In this talk I propose that infant-directed song functions as an honest signal of parental investment. I support the proposal with two lines of work. First, I show that the perception and production of infant-directed song are characterized by human universals, in cross-cultural studies of music perception run with listeners on the internet; in isolated, small-scale societies; and in infants, who have much less experience than adults with music. Second, I show that the genomic imprinting disorders Prader-Willi and Angelman syndromes, which cause an altered psychology of parental investment, are associated with an altered psychology of music. These findings converge on a psychological function of music in infancy that may underlie more general features of the human music faculty.

2018-10-09

Mapping Functions for Multilingual Word Embeddings

Ndapa Nakashole

Computer Science and Engineering, UCSD

+ more

Inducing multilingual word embeddings by learning a linear map between embedding spaces of different languages achieves remarkable accuracy on related languages. However, accuracy drops substantially when translating between distant languages. Given that languages exhibit differences in vocabulary, grammar, written form, or syntax, one would expect that embedding spaces of different languages have different structures especially for distant languages. I will present our work on understanding the behavior of linear maps learned by word translation methods. Additionally, I will present some initial solutions to the shortcomings of such linear maps.
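The linear-map approach alluded to above is commonly implemented as an orthogonal Procrustes problem solved with an SVD over a seed dictionary of translation pairs. The sketch below illustrates that general method with random matrices standing in for real embeddings; it is not the specific mapping method or analysis presented in the talk.

import numpy as np

# Minimal sketch of inducing a cross-lingual map from a seed dictionary:
# find the orthogonal W minimizing ||X W - Y||_F (Procrustes), where rows of
# X and Y are embeddings of translation pairs. Data here are synthetic.

rng = np.random.default_rng(0)
d, n_pairs = 50, 1000
X = rng.normal(size=(n_pairs, d))                 # "source-language" embeddings
true_W = np.linalg.qr(rng.normal(size=(d, d)))[0] # a random orthogonal ground-truth map
Y = X @ true_W + rng.normal(scale=0.01, size=(n_pairs, d))  # noisy "target" embeddings

U, _, Vt = np.linalg.svd(X.T @ Y)  # Procrustes solution via SVD of X^T Y
W = U @ Vt                          # learned orthogonal map

def translate(x_vec, target_matrix, mapping):
    """Nearest-neighbor translation: map x_vec and pick the closest target row."""
    scores = target_matrix @ (x_vec @ mapping)
    return int(np.argmax(scores))

print(np.allclose(W, true_W, atol=0.05))  # the map is recovered up to noise
print(translate(X[0], Y, W) == 0)          # first source word maps back to its pair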

Speaker Bio:

Ndapa Nakashole is an Assistant Professor  in the Department of Computer Science and Engineering at the University of California, San Diego. Prior to UCSD, she was a Postdoctoral Fellow in the Machine Learning Department at Carnegie Mellon University. She obtained her PhD from Saarland University, Germany, for research carried out at the Max Planck Institute for Informatics in Saarbrücken. She completed undergraduate studies in Computer Science at the University of Cape Town, South Africa.

2018-10-02

The Online-processing of Long Distance Dependency Formation:
its Mechanisms and Constraints

Nayoun Kim

Northwestern University

+ more

Resolving long-distance dependencies involves linking the dependent element to the controlling element. In the case of wh-filler-gap dependency formation (WhFGD), the wh-element is linked to the gap. In the case of ellipsis resolution, the ellipsis site is linked to the antecedent. In the processing of long-distance dependency resolution, it is plausible to assume that two component processes are involved: the storage/maintenance component and the retrieval component (Gibson, 1998). Our studies attempt to reveal the mechanisms at work behind online WhFGD formation and antecedent retrieval processes in Noun Phrase Ellipsis (NPE). Specifically, we aim to uncover how the maintenance component and the retrieval component interact (Wagers & Phillips, 2014) by paying special attention to what information associated with the wh-filler is retrieved once a gap is recognized. We contend that (i) the filler is released from memory, depending on the grammatical requirement of the filler; (ii) given that information associated with the filler being retrieved reflects the extent to which the filler is maintained, the parser retrieves grammatical information associated with the wh-filler; and (iii) the parser is sensitive to grammatical distinctions at the ellipsis site in contrast to the processing of anaphoric one.

2018-06-05

Knowledge of a syntactic universal guides generalization to new structures

Adam Morgan

UCSD Psychology

+ more

A perennial debate among language scientists is to what degree language learning depends on the input.  Strict nativists argue that at some level of abstraction, language is hardwired into the human genome and input only serves to help learners decide which of the genetically endowed grammars belongs to their language.  Empiricists rely more heavily on experience, suggesting that language learning involves domain-general cognitive mechanisms abstracting away from patterns in the input in order to construct the grammar.  In four experiments, we first ask whether monolingual English speaking adults who are exposed to a subset of a novel grammar can generalize to other, previously unencountered structures of that grammar.  We show that they do indeed produce structures that they have not yet been exposed to.  We then ask what kind of knowledge guides their generalization to the novel structures: language-specific knowledge (i.e., knowledge of English) or language-general knowledge (whether that be innate knowledge of Universal Grammar or constraints imposed by limited general cognitive resources).  We show (1) that participants generalize to novel structures only when those structures are predicted by a universal property of languages, and (2) that this is true even when the resulting grammar does not behave like English.  Our findings therefore suggest that adult language learning does not rely exclusively on the input, and furthermore, whatever pressures give rise to universal patterns across languages also appear to guide learning in adult language learners. That is, synchronic language learning recapitulates diachronic language evolution.

2018-05-29

Two languages or one: Bilingual language acquisition

Reina Mizrahi

UCSD Cognitive Science

+ more

Currently at least 1 in 5 people in the US speak a language other than English and the number of children growing up in bilingual homes is large and rising (Census Bureau 2015). Yet, most theories of language acquisition are based on monolingual children, leaving a number of unanswered questions about language development in a large segment of the population. One major question in language development is when and how young language learners can identify the language(s) they are hearing. Depending on whether a child is bilingual or monolingual, being able to correctly identify the language being heard may be critical for language learning, comprehension, and appropriate language use in a given context. A relatively untested idea, which we assess here in a new eye-tracking paradigm, is that children keep languages separate by associating individuals with particular languages. We ask here whether the language someone speaks can serve as a cue for talker identification, and whether bilinguals are especially skilled at associating their two languages with different talkers. The current data suggest that language is a salient vocal feature for talker identification across language groups. Current and future findings can shed light on when and how during development bilingual children keep separate representations of the languages they speak, and the information in spoken language that allows them to do so.

2018-05-22

How quickly do speakers decide which word to say next?

Dan Kleinman

Beckman Institute, University of Illinois at Urbana-Champaign

+ more

How long does it take to retrieve a word’s representation in long-term semantic memory when preparing to speak? Recent EEG studies using language production tasks have found that the posterior P200 component has a greater amplitude when lexical selection is more difficult, leading to claims that speakers initiate lexical access within 200 ms. However, ERP studies from other domains have largely converged on the conclusion that it takes at least 300 ms to access the representation of a stimulus in long-term semantic memory. One possible reason for this discrepancy is that production P200 studies have exclusively used picture naming tasks, which afford a single correct response that must be produced without a broader context. To determine whether such factors limit the generalizability of these findings, we investigated the relationship between P200 amplitude and lexical selection difficulty using a sentence completion (cloze) task, which affords many acceptable responses and a semantically rich context.

In two experiments, we recorded subjects’ EEG (Exp. 1: n=40; Exp. 2: n=20) as they read 240 RSVP sentences that varied in constraint (how strongly the context predicted the final word). On 50% (Exp. 1) or 100% (Exp. 2) of the trials, the last word of the sentence was omitted and subjects instead saw a blank, prompting them to overtly produce a completion. (The remaining sentences in Exp. 1 were completed with a visually presented sentence-final word that was either expected or unexpected – a standard N400 elicitation task.) Analyzing responses at the level of individual trials, we found a significant relationship between RT and P200 amplitude at central and posterior sites in both experiments, but – crucially – in the opposite direction from that previously observed: A larger P200 was associated with faster RTs.

These data show that the relationship between posterior P200 amplitude and lexical selection difficulty is task-dependent. We propose that the P200 indexes the recruitment of attentional resources during word production, but that more attention may be associated either with increased difficulty (as when subjects recruit extra cognitive resources in anticipation of naming difficulty) or with better preparation (as when subjects make better use of context to formulate a response). Importantly, under this account, the posterior P200 is not sensitive to the activation of specific lexical items. Thus, we think it is premature to conclude from production P200 experiments that speakers initiate lexical access within 200 ms.

2018-05-15

Word recognition in deaf sign-print bilinguals: A comparison of children and adults

Agnes Villwock

UCSD Linguistics
NSF Science of Learning Center in Visual Language & Visual Learning (VL2)

+ more

Traditionally, most studies on bilingualism have investigated the acquisition and usage of each language separately. However, more recent studies have shown that, even if only one language is overtly present, bilinguals seem to co-activate both – for example, during reading (Dijkstra, 2005), and listening (Marian & Spivey, 2003), but also while producing speech (Kroll, Bobb, & Wodniecka, 2006). Besides unimodal bilinguals, some individuals are bilingual in languages with different modalities: Deaf individuals tend to be users of a signed and a spoken language. In order to investigate crossmodal co-activation in this population, we presented written words in semantic relatedness tasks (English, Morford et al., 2011; German, Kubus, Villwock, et al., 2015) to highly proficient deaf sign-print bilinguals (American Sign Language, ASL; German Sign Language, DGS) and hearing control groups. Unbeknownst to the participants, half of the presented word pairs had phonologically related sign language translation equivalents, and half had unrelated translation equivalents. Phonologically related translation equivalents shared at least two of three phonological parameters (handshape, location and/or movement). In accordance with our predictions, the results showed significant effects of phonological relatedness on the performance in the deaf groups, but not in hearing controls. Thus, deaf bilinguals seem to automatically activate the sign translations of written words – even if there are no sign language stimuli present. Subsequently, we aimed at investigating the development of bilingual lexical processing. Applying the implicit priming paradigm that was used with adults (Morford et al., 2011; Kubus, Villwock, et al., 2015), we asked deaf ASL-English bilingual (N = 39, ages 11-15) and hearing English monolingual (N = 26, ages 11-14) children to participate in an English semantic relatedness task. In their model of bilingual development in signing bilinguals, Hermans et al. (2008) have suggested that deaf children access second language (L2) word forms through lexical mediation in the first language (L1). In this case, it could be expected that deaf children, like deaf adults in previous studies, will show co-activation of written words and signs. However, a parallel activation of signs and words could also be the result of activating and using both languages over many years. In this case, it could be expected that deaf children will show no or less evidence of co-activation than adults. In accordance with the first prediction, the results showed a significant effect of sign phonology in the deaf children, but not the hearing controls. We conclude that co-activation of the L1 and L2 is already present in signing deaf bilingual children, and that language processing is non-selective regardless of the degree of similarity of a bilingual’s two languages.

2018-05-08

What Makes a Language Weird? Investigating Correlates of Language Rarities

Arthur Semenuks

UCSD Cognitive Science

+ more

The existence of absolute linguistic universals (ALUs) would be useful and important for linguistic theory. Knowing that human languages always have (or don’t have) a particular feature accomplishes two jobs: on the one hand it limits what dimensions of variation need to be taken into account when describing the structure of a language, and on the other it is likely to have consequences for understanding cognitive or extracognitive underpinnings of language structure and the dynamics of language evolution and change.

However, even though their existence would be useful, are there any (non-trivial) ALUs? Their existence needs to be supported by typological work, but it has been noted that, given enough time, counterexamples to proposed absolute universals are often found by typologists, which then “downgrades” those ALUs to statistical tendencies (Dryer, 1998; Evans & Levinson, 2009). Additionally, computational modelling suggests that, due to sampling limitations, rarely can positive typological evidence alone warrant considering a pattern an ALU, even in cases where that pattern is universally observed in a large number of languages surveyed so far (Piantadosi & Gibson, 2014).

In the talk I will argue that looking in the other direction and investigating what types of languages tend to have rara – features that are extremely typologically uncommon, limited to only a handful of languages – provides important complementary evidence, as it can be used for understanding what environments lead languages to develop rare structural patterns, including those violating proposed ALUs, and thus why rarities are rare, why more common features are more common, and how categorical the limits of linguistic variation are.

The current work used the University of Konstanz database of linguistic rarities (Plank, 2006) to investigate whether sociogeographical and phylogenetic factors are correlated with the probability of a language having rara. Analyses suggest that (i) European languages, (ii) languages spoken by larger communities, and (iii) language isolates tend to have rara more often. One potential explanation for each of these findings is that those groups of languages are indeed more cross-linguistically unusual because of historical and/or sociocultural factors. Alternatively, the findings could be explained by the increased attention given to these particular types of languages by linguists. In the talk I will discuss the implications of both interpretations for the study of linguistic diversity, and argue for the latter interpretation for (i) and (ii), and for the former interpretation for (iii).

2018-05-01

Pragmatic Accommodation & Linguistic Salience in Russian Political Discourse

Lindy Comstock

UCLA Applied Linguistics

+ more

Politics interviews often center around polarizing issues that evoke a display of stance through pragmatic cues. Therefore, this genre serves as an ideal setting for the study of intercultural speech accommodation. Psychological speech accommodation theory and sociolinguistic studies predict that when speakers affiliate, they will attempt to mirror features of their interlocutor's speech in their own language and will produce dissimilar features upon disaffiliation. Yet the ability to reliably perceive and produce certain types of linguistic phenomena may be linked to neurological development within a critical period of language acquisition. Thus, L2 or heritage speakers may lack native-like awareness of linguistic systematicity, relying instead upon an impressionistic understanding of the linguistic regularities they perceive. How then do L2 and heritage speakers accommodate, and can speech accommodation occur felicitously in intercultural communication? This study analyzes which linguistic phenomena—prosody or formulaic phrases—are preferentially assimilated by Russian and American political actors when speaking their L2 or heritage language to a native audience. Prosody and lexical items are theorized to differ in their degree of perceptual saliency for L2 and heritage speakers: the successful acquisition of prosody has been associated with age of acquisition, whereas the ability to learn lexical items continues to grow into adulthood. Both prosody and formulaic phrases may also function as pragmatic resources, which tolerate greater idiosyncratic use. Therefore, we expect heritage speakers will successfully accommodate with prosodic phenomena in linguistically systematic ways, while L2 speakers will fail to accommodate prosodically or utilize these phenomena impressionistically. Both groups should display equal proficiency in assimilating formulaic phrases. To the contrary, case studies of four L2 and heritage speakers support a preference for prosodic accommodation among all subjects and suggest a disassociation between traditional measures of linguistic proficiency and the ability to reliably detect and reproduce linguistic systematicity in prosodic phenomena. Findings also suggest a novel methodology for detecting linguistic accommodation in intercultural communication.

2018-04-17

Effects of grammatical variations on predictive processing

Nayoung Kwon

Konkuk University, Seoul, Korea

+ more

In this talk, I present a series of experiments that investigated how grammatical variations affect predictive processing. In the first experiment, I show that highly constraining contextual information may lead to preactivation of fine-grained semantic features in advance of bottom-up input, even in a language, such as Chinese, without overt morpho-syntactic markers that correlate with these semantic features (Kwon & Sturt, 2017; see Wicha et al., 2003 & Van Berkum et al., 2005 for gender manipulation in Spanish and Dutch respectively; Szewczyk & Schriefers, 2013 for animacy manipulation in Polish). In the second series of experiments, turning my attention to structural predictions, I discuss experimental results suggesting that structural predictions can be constrained by surface or grammatical characteristics. That is, in contrast to highly predictive dependency formation in a syntactic anaphoric dependency (cf. Cai et al., 2013), the influence of predictive dependency formation is more limited in a cataphoric dependency (Kwon & Sturt, 2014), despite the observation that similar cognitive mechanisms underlie the processing of anaphoric and cataphoric dependencies (Kwon, Kluender, Kutas, & Polinsky, 2013). I discuss potential implications of these findings in the talk.

2018-04-10

Morphology and Memory

Ray Jackendoff

Tufts University

+ more

We take Chomsky’s term “knowledge of language” very literally.  “Knowledge” implies “stored in memory,” so the basic question of linguistics is reframed as

What do you store in memory such that you can use language, and in what form do you store it?

Traditionally – and in standard generative linguistics – what you store is divided into grammar and lexicon, where grammar contains all the rules, and the lexicon is an unstructured list of exceptions.  We develop an alternative view in which rules of grammar are lexical items that contain variables, and in which rules have two functions.  In their generative function, they are used to build novel structures, just as in traditional generative linguistics.  In their relational function, they capture generalizations over stored items in the lexicon, a role not seriously explored in traditional linguistic theory.  The result is a lexicon that is highly structured, with rich patterns among stored items.

We further explore the possibility that this sort of structuring is not peculiar to language, but appears in other cognitive domains as well.  The differences among cognitive domains are not in this overall texture, but in the materials over which stored relations are defined – patterns of phonology and syntax in language, of pitches and rhythms in music, of geographical knowledge in navigation, and so on.  The challenge is to develop theories of representation in these other domains comparable to that for language.

2018-04-03

The dynamic nature of the bilingual language system

Eve Higby

University of California, Riverside

A common assumption among language researchers is that the native language, once acquired, remains largely the same throughout the lifespan. However, recent work on bilingualism has shown that the native language is much more dynamic than previously thought, demonstrating the plasticity of the language system to various patterns of language input and use. I will present data from behavioral, electrophysiological, and neuroimaging studies focusing on two major subsystems of language: the lexicon and the syntactic system. First, I will examine how bilinguals choose the right words in spoken language given that lexical items in both languages are thought to compete for selection. Next, I examine how integrated the syntactic systems are for bilinguals by probing the vulnerability of the syntactic system to undergo change. Results across these studies reveal that bilinguals make use of all the linguistic tools at their disposal to efficiently produce and comprehend language. Further, this work uncovers characteristics of the language system that are difficult to obtain from work on monolinguals – that the language system adapts to various contexts and the demands placed on it in dynamic ways.

Eve Higby is a postdoctoral researcher in Psychology at the University of California, Riverside. She is supported by a University of California Chancellor's Postdoctoral Fellowship and a National Science Foundation Postdoctoral Research Fellowship. She received her Ph.D. in Speech-Language-Hearing Sciences from the City University of New York Graduate Center in 2016. Her work focuses on language processing in bilingualism and aging and how language is supported by other aspects of cognition.

2018-03-13

Re-revisiting [sic] Whorf: Spatial frames of reference in the Ryukyu Islands

Rafael Núñez (1), in collaboration with Kenan Celik & Yukinori Takubo (2)

(1) UCSD Cognitive Science
(2) National Institute for Japanese Language and Linguistics

The Sapir-Whorf (or linguistic relativity) hypothesis states, roughly, that language influences or even restructures thought and cognition. An important arena for testing this hypothesis has been the study of spatial frames of reference. Studies done in different parts of the world have documented that speakers from some languages consistently describe table-top spatial relations using ego-centric “relative” terms (e.g., the apple is left of the glass) while speakers of other languages prefer allo-centric “absolute” terms (e.g., the apple is east of the glass). Furthermore, when performing non-linguistic tasks (e.g., memory tasks) these speakers tend to produce behaviors that correlate with the frames of reference foregrounded by the languages they speak. A common interpretation of these results is that language plays a significant role in restructuring fundamental domains, like space. As Whorf also pointed out, however, “language and culture are constantly influencing each other”, so the directionality (and linearity) of the causation affecting thought and cognition might not be as clear as we might think. Moreover, in the above findings there is often a confound, as “relative” responses tend to be produced by speakers who share certain cultural traits such as living in urban areas, being schooled (and therefore being literate), and belonging to large-scale linguistic communities, while “absolute” responses are often produced by speakers who share other cultural traits such as living in rural (often isolated) areas, with little or no schooling (and often being illiterate) and belonging to small-scale linguistic communities. As an attempt to partially disentangle these factors, in this talk I’ll present two studies (of an ongoing project) conducted among speakers of Miyako, one of the (endangered) indigenous languages of the Ryukyu Islands spoken in the western end of what is today the Okinawa prefecture in Japan. Japanese and Miyako are members of the Japonic language family, are mutually unintelligible, and with respect to spatial frames of reference are structurally equivalent (i.e., both languages have distinct lexemes and grammatical resources for dealing with “relative” and “absolute” frames of reference). But, importantly, while Japanese has been labeled as a “relative” language for table-top tasks, Miyako has been described as “absolute”. Since Miyako speakers are rather well schooled and are fully bilingual Japanese-Miyako, studying their responses to the above psycholinguistic tasks provides an interesting opportunity for teasing apart linguistic from cultural factors influencing thought and cognition.
(Funded by Japan Society for the Promotion of Science)

2018-03-06

From information density in dialogue to language acquisition:
Neural language models as models of human language processing

David Reitter Penn State

The theory of predictive coding maintains that simple, implicit expectations about what happens next in our environment help us perceive and disambiguate a complex world. This principle is no stranger to language processing. But how do we arrive at these predictions, and how do they influence our linguistic behavior? In this talk, I review some recent results from our lab that adopt predictive language models as they are commonly found in natural-language processing systems. With these, we quantify information density in language via an entropy-like measure: unexpected input carries more information. A study of dialogue [1] shows that speakers systematically converge in their information density, revealing a topic structure. I provide a closer look at how these models work and can be improved [2], and I examine whether a language model can be trained better by exposing it to a visual environment, akin to the situated language acquisition that human L1 learners experience. Are language models a starting point for models of a cognitive process?
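
As a concrete illustration of the entropy-like measure mentioned above, the sketch below computes per-word surprisal (negative log probability) under a toy bigram model; the corpus, smoothing choice, and function names are invented for this example and stand in for the neural language models discussed in the talk.

```python
# Minimal sketch of per-word surprisal as an "information density" measure.
# A toy bigram model stands in for the neural language models discussed in the
# talk; the corpus, smoothing, and function names are invented for illustration.
import math
from collections import Counter

corpus = "the dog chased the cat . the cat chased the dog .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = set(corpus)

def bigram_prob(prev, word, alpha=1.0):
    """Add-alpha smoothed estimate of P(word | prev)."""
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * len(vocab))

def surprisal(prev, word):
    """Information carried by `word` in context, in bits: -log2 P(word | prev)."""
    return -math.log2(bigram_prob(prev, word))

utterance = "the dog chased the cat .".split()
per_word = [surprisal(p, w) for p, w in zip(utterance, utterance[1:])]
print("per-word surprisal (bits):", [round(s, 2) for s in per_word])
print("mean information density:", round(sum(per_word) / len(per_word), 2))
```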

2018-02-27

The prevalence of repair in studies of language evolution

Ashley Micklos UCLA

Vinicius Macuch Silva University of Tübingen

Nicolas Fay University of Western Australia

While studies of language evolution have themselves evolved to include interaction as a feature of interest (Healey et al, 2007; Tamariz et al, 2017; Fay et al, 2017; Byun et al, in press), many still fail to consider just what interaction offers emerging communication systems. That is, while it’s been acknowledged that face-to-face interaction in communication games is beneficial in its approximation of natural language use (Macuch Silva & Roberts, 2016; Nölle et al, 2017), there remains a lack of detailed analysis of what this type of interaction affords participants, and how those affordances impact the evolving language. To this end, here we will expose one particular process that occurs in interaction: repair, or the processes by which we can indicate misunderstanding and resolve problems in communication (Schegloff, Jefferson, & Sacks, 1977; Jefferson, 1972). Though it is often not explicitly analyzed, repair is a relevant aspect of interaction to consider for its effects on the evolution of a communication system as well as how it demonstrates the moment-to-moment processing and negotiation of alignment in emerging communication.

We present data from various studies of language evolution in which we document how repair is carried out, the types of repair present, and their effect on novel signaling. These studies vary in modality, an aspect that will be discussed for the affordances it provides for doing repair. Repair was found in 10-20% of experimental trials, depending on the degree of interactivity allowed. Repair trials were found to improve communicative efficiency by promoting a signal’s informativeness. We find repair is ubiquitous across modalities and - even when not being directly tested - it is a factor that arises in, and affects the processes of, emergent communication. Namely, it serves as a resource for overcoming miscommunication and for establishing alignment to more efficient signals. More broadly, we hope to call attention not only to the need to consider interaction as an ecologically valid site for language evolution and use, but also to the specific mechanisms within interaction that drive language to be structured as it is.

2018-02-20

Orthographic priming by printed English letters and fingerspelling fonts

Zed Sevcikova Sehyr San Diego State University

Deaf ASL signers experience English orthography in two forms: printed letters and fingerspelled letters. The contribution of English letter and fingerspelling recognition to orthographic processing and reading in deaf readers remains unclear.

In a single-letter priming paradigm, we investigate the electrophysiological characteristics of letter and fingerspelling representations. Event-related brain potentials (ERPs) were recorded over 29 scalp sites while participants performed a probe detection task (detect the printed or fingerspelled letter Y). Targets were single letters presented in a typical printed English font or in an ASL fingerspelling font, tested in separate blocks, and presented centrally for 200ms immediately preceded by a 100ms prime that was either an English letter or a fingerspelling font.

Twenty deaf ASL signers participated in the study. The data suggested that fingerspelling primed letters, but letters did not prime fingerspelling. That is, when letter targets were preceded by fingerspelling primes, the N2 component was larger to unrelated compared to repeated letter targets. When fingerspelling targets were preceded by letter primes, there was no difference between repeated and unrelated pairs within the N2 window, suggesting the absence of a priming effect. These findings indicate that early in processing, fingerspelling fonts are mapped to English letter representations, but English letters do not activate fingerspelling. This pattern is consistent with previous research indicating that deaf ASL signers recode fingerspelled words into English in short-term memory, whereas printed words are not recoded as fingerspelling (Sehyr, Petrich, & Emmorey, 2016), and it might have important implications for skilled reading in the deaf population.

Sehyr, S. Z., Petrich, J., and Emmorey, K. (2016) Fingerspelled and Printed Words Are Recoded into a Speech-based Code in Short-term Memory. Journal of Deaf Studies and Deaf Education 22 (1).

2018-02-13

How infants without verbs talk about motion

Jean Mandler UC San Diego Cognitive Science

Young infants attend to motion even more than to the objects making the motion. So why, in many (although not all) languages, do words about motion appear later than words for objects? I discuss this phenomenon, emphasizing the earliest words in English, showing that motion actually is referred to in several ways other than verbs. Because expressing actions in English typically requires more than one word (e.g., come in, go out, etc.), newly verbal infants find other ways to talk about what interests them. I use as examples how often-heard words like "hi" are used to express coming in and "up" to express a request to be picked up.

2018-02-06

Directional verb constructions under construction:
The case study of San Juan Quiahije Chatino Sign Language

Lynn Hou UC San Diego Linguistics

Directional verbs, or so-called verb agreement morphology, in emerging signed languages have attracted much scholarly attention in recent years. In sign language linguistics, directional verbs constitute one class of verbs of transfer and motion that “point” to (or encode) their arguments through spatial modification of verb forms for marking argument structure constructions. Elicited studies of different emerging sign languages suggest that second- and third-generation deaf signers and later cohorts of deaf children with earlier language exposure create a more complex system of directionality.

Yet little is known about how signing children and adults produce those verbs in actual usage events. Moreover, little is also known about how this process occurs in a lesser-studied sociolinguistic context without critical masses of child peers in an educational institution for the deaf. I address this gap with the case study of an emerging sign language in a hybrid of ethnographic and usage-based linguistics framework.

San Juan Quiahije Chatino Sign Language is a constellation of family sign language varieties that recently originated among eleven deaf people and their families in a rural indigenous Mesoamerican community in Oaxaca, Mexico. In this talk, I give a brief overview of the signing community, showing some similarities and differences in the language ecology of each family. I focus on how language emergence and acquisition is most robust in one extended signing family of first- and second-generation signers, analyzing their usage of directional and non-directional verbs.

An elicitation task reveals that the signers do not encode their arguments in directional verbs and instead rely on constituent order and/or multiple single-argument clauses. Spontaneous conversations, on the other hand, show that signers deploy pointing constructions for indicating arguments and incorporate them into a handful of verbs, particularly the verb of giving. The first-generation signer’s system treats directional verbs as separate networks of constructions and second-generation signers learn them in a piecemeal fashion but do not overgeneralize them to other verbs. Those findings highlight the role of usage events, along with the benefit of input, in the emergence and acquisition of directionality in a new rural sign language.

2018-01-30

The functional neuroanatomy of syntax

William Matchin UC San Diego Linguistics

Syntax, the combinatorial system of language, is a powerful engine of creativity and arguably a defining human trait. Accordingly, there is great interest in understanding its biological bases. However, the neurobiology of syntax is heavily disputed, with some researchers arguing for its localization to the inferior frontal gyrus (IFG) and others arguing for its localization to the temporal lobe. I present a new neuroanatomical model of syntax, supported by neuroimaging data, that illustrates how both systems contribute to syntax but with distinct computational roles. The posterior temporal lobe processes hierarchical structures that interface with semantic systems in the ventral stream, while the IFG, particularly the pars triangularis, processes linear morphological representations that interface with articulatory systems in the dorsal stream. Combined, these regions interact as a syntactic working memory network. I will also discuss a neuronal retuning hypothesis of language-specificity in the pars triangularis: that the syntactic sequencing function of this region emerges as an interaction of dorsal stream sequencing computations and hierarchical representations in the ventral stream. This model provides insights into the nature of grammatical deficits in aphasia and potentially developmental language disorders as well.

2018-01-23

A structure that no one knows why we use it

Adam Morgan UCSD Psychology

English resumptive pronouns, like the "it" in "a structure that no one knows why we use it," pose a problem for how we think about language.  They are generally thought to be ungrammatical, but speakers regularly and reliably produce them.  What might account for this puzzling state of affairs?  One long-standing idea is that they may serve as a last-ditch option -- a way for the production system to ameliorate what will otherwise be an even more unacceptable gap structure, as in, "a structure that no one knows why we use _."  Supporting evidence for this claim comes from recent work showing that the worse a gap sounds in a given structure, the more likely speakers are to produce a resumptive pronoun there instead.  Based on this finding, we previously proposed that the production system bases the decision to produce a resumptive pronoun on an assessment of how (un)acceptable an upcoming utterance will be without it.  Here we report four experiments testing the hypothesis that unacceptability triggers the production system to produce resumptive pronouns.  We conclude that (un)acceptability cannot play a causal role in the decision to produce a resumptive pronoun, and instead it must be based on properties of the abstract structural makeup of the utterance.

2018-01-16

Conceptual Mappings in Brain and Mind

Seana Coulson Cognitive Science, UCSD

I will discuss the importance of metaphoric and analogical mapping as organizing structures in cognition, and suggest that maps and mappings are a fundamental aspect of neurophysiology. In order to demonstrate the role of mapping in language comprehension, I'll present results from several event-related potential (ERP) studies on the comprehension of metaphoric language. Finally, I consider the extent to which synesthesia provides a good model for the neural basis of metaphor.

2017-12-05

Errata Corrige: The Pragmatics of Error Correction

Leon Bergen UCSD Linguistics

Natural language provides a great deal of flexibility for repairing errors during speech. For example:

1a) Bob went to the store. Sorry, Bob went to the restaurant.
1b) Bob went to the store. Sorry, the restaurant.

Example 1a) is compatible with a simple theory of how errors are corrected: when an error occurs, simply replace the first sentence (containing the error) with the second (correct) sentence, and throw away everything that was said in the first sentence. While such a theory is appealing, it cannot explain examples like 1b), in which the first (error-containing) sentence provides content necessary for interpreting the second (correct) sentence. In this talk, I will examine how people decide which content to throw away when they hear an error, and how the remaining content is used to interpret later portions of the discourse. I will propose a general theory of error correction in which these choices are made using Bayesian pragmatic reasoning. The theory explains the effects identified by previous semantic approaches (Asher & Gillies 2003, van Leusen 2004, Lascarides & Asher 2009, Rudin et al. 2016), and makes several novel predictions.

2017-11-28

The role of sensorimotor processes in language comprehension

Barbara Kaup University of Tübingen

According to the embodied-cognition framework of language comprehension, sensorimotor processes play an important role for meaning composition: During language processing, comprehenders are assumed to mentally simulate the objects, situations and events referred to in the linguistic input. More specifically, it is usually assumed that words automatically activate experiential traces in the brain that stem from the comprehenders’ interactions with the referents of these words. When words appear in larger phrases or sentences, the activated experiential traces are presumably combined to yield an experiential simulation consistent with the meaning of the larger phrase or sentence. Abstract concepts are assumed to be captured in these simulations by being metaphorically mapped onto more concrete experiential dimensions, and linguistic operators such as negation or disjunction are typically considered to function as cues controlling specific integration processes.

In my talk, I will give an overview of experimental work conducted in my lab investigating these assumptions. In addition, I will present some preliminary results from experiments investigating developmental aspects, which shed further light on the embodied-cognition framework.

2017-11-21

Cortical encoding of intelligible speech

Jonathan Venezia VA Loma Linda

A crucial challenge in language neuroscience is to describe the mechanism by which linguistic information is extracted from the acoustic speech signal and represented in the brain.  This is particularly difficult in light of the fact that acoustic speech features do not map straightforwardly to perceptual or linguistic representations.  Further, neural coding of the likely information-bearing elements of the speech signal – frequency sweeps, amplitude modulations, etc. – has been described largely in terms of the neural response to simple synthetic stimuli, rather than the speech signal itself.  Here, I describe a paradigm, Auditory Bubbles (aBub), which uses a single representational space – the spectrotemporal modulation domain – to link acoustic patterns directly to perceptual and neural representations of speech.  Spectrotemporal modulations (STMs) are fluctuations in speech energy across time and frequency that are known to convey linguistic and other communicative information.  Briefly, aBub is a classification procedure designed to identify the particular STMs that support intelligible speech perception and/or maximally drive the neural response to speech.  The procedure works by filtering STMs from the speech signal at random and relating the filter patterns to changes in an outcome measure – e.g., word recognition performance or amplitude of neural activation – in order to measure which STMs drive differences in the outcome.  In previous work, I have used aBub to demonstrate that a small range of STMs contributes maximally to speech intelligibility.  Here, I present an fMRI study that uses aBub to derive speech-driven spectrotemporal receptive fields (STRFs) in the human auditory cortex.  Data-driven clustering reveals a natural hierarchical organization of STRFs in which early cortical regions encode a broad range of STM features while later regions (e.g., the superior temporal gyrus/sulcus) encode the relatively narrow range of STM features important for intelligibility.  I present additional evidence that STRFs within each level of the hierarchy are specialized for extraction of particular subsets of (intelligible) speech information.
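
The toy sketch below illustrates the general reverse-correlation logic behind a bubbles-style classification procedure: each simulated trial retains a random subset of cells in a spectrotemporal-modulation grid, and the classification image is the difference between the average masks on high- and low-outcome trials. The grid size, simulated data, and variable names are invented for illustration; this is not the actual aBub implementation.

```python
# Schematic "bubbles"-style reverse correlation over a spectrotemporal modulation
# (STM) grid: keep a random subset of STM cells on each simulated trial, then
# contrast the average masks for high- vs. low-outcome trials. All values invented.
import numpy as np

rng = np.random.default_rng(0)
n_trials, grid = 2000, (8, 8)                 # toy grid: 8 rate x 8 scale bins

# Hypothetical "true" importance map: a small patch of STMs supports intelligibility.
true_map = np.zeros(grid)
true_map[2:4, 1:4] = 1.0

masks = rng.random((n_trials, *grid)) < 0.5   # which STM cells survive filtering
# Simulated outcome (e.g., proportion words correct): driven by the retained patch.
outcome = (masks * true_map).sum(axis=(1, 2)) + rng.normal(0, 0.5, n_trials)

high = outcome > np.median(outcome)
classification_image = masks[high].mean(axis=0) - masks[~high].mean(axis=0)
print(np.round(classification_image, 2))      # peaks over the "important" STM cells
```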

2017-11-14

How the Brain Got Language: Towards a New Road Map

Michael Arbib Adjunct Professor of Psychology, UCSD
University Professor Emeritus, USC

This talk will report on a workshop that was part of the ABLE (Action, Brain, Language, Evolution) Project and was held in La Jolla last August. It approached the challenge from the viewpoint of comparative (neuro)primatology, comparing brain, behavior and communication in monkeys, great apes and humans. Researchers in cognitive science, anthropology, (neuro)linguistics, archeology and more used their own research as the basis for pinpointing key aspects of language and placing them within an evolutionary framework. A key notion was that the evolutionary basis for many of these features was rooted in action, emotion or aspects of social interaction, rather than in communication per se. The talk will offer a sample of research highlights and stress their relevance for charting future research on the evolution of the language-ready brain.

2017-11-07

Protracted auditory-perceptual learning during development
(Preview of Psychonomics talk)

Sarah Creel and Conor Frye UCSD Cognitive Science

Many theories suggest that auditory pattern learning proceeds rapidly and hierarchically. For example, infants are thought to solidify their native-language sound representations in the first year of life. These sound categories then allow efficient storage of the sound forms of words, the next level in the hierarchy. A very different interpretation is that infants’ early learning reflects only the beginnings of a gradual accretion of sound pattern knowledge, from which speech sounds, words, and other auditory objects emerge.

We discuss findings from 3-5-year-old children’s word learning and music perception. When auditory patterns are similar to each other (e.g. novel words zev, zef), children have difficulty retaining auditory-visual associations. This is inconsistent with accounts of early perceptual expertise in either domain, but is more consistent with auditory perceptual pattern learning that proceeds slowly over development. Another implication is that auditory pattern learning in language is not “special;” rather, it is fundamentally similar to nonlinguistic auditory pattern learning. Implications for theories of language and auditory development are discussed.

2017-10-31

Effects of childhood language deprivation on white matter pathways:
Evidence from late L1 learners of ASL

Qi Cheng UCSD Linguistics

Previous research identified ventral and dorsal white matter tracts as crucial for language processing, and their maturation correlates with syntactic development. Unknown is whether growth of these language-relevant pathways is solely biologically determined, or also shaped by early learning. To investigate the effects of early language deprivation on brain connectivity, we examined white matter connectivity of language-relevant pathways among adults who were born deaf with or without early access to American Sign Language. We acquired diffusion MRI data from three deaf individuals who experienced minimal language during childhood and from 12 deaf native signers. Compared with the native group, all three cases demonstrated significantly lower fractional anisotropy for the left dorsal arcuate fasciculus, but not for other language-relevant tracts. Our findings indicate that growth of the language pathways is not solely driven by biological maturation, but also requires language acquisition during childhood.

2017-10-24

Growing-up Bilingual: Implications for Cognitive and Brain Development in Spanish-English Bilingual Children

Marybel Robledo Gonzalez Children's Hospital Los Angeles

The acquisition of a second language often occurs during early childhood, a period of dynamic change in cognitive, language, and brain development. This makes childhood an exciting time to study bilingualism in relation to developing cognitive skills and brain structure. It is thought that bilinguals must resolve competing activation of both languages and recruit mechanisms of executive control to inhibit one language and select or switch to the target language. While several studies have reported a positive effect of bilingualism on cognitive skills among bilinguals, an ongoing debate raises the question of whether such an effect is restricted to select bilingual populations with specific characteristics. In fact, bilingual children vary widely in characteristics of language experience, such as how much one language is practiced relative to the other, and how often both languages are mixed in speech. In the present study, I examined individual differences in language experience among bilingual children in relation to cognitive skills and developing brain structure. Fifty-three Spanish-English bilingual children, ages 5–13 years, were tested on cognitive measures of response inhibition, inhibitory control, and cognitive flexibility. Diffusion-weighted measures of white matter microstructure were also acquired. Findings from this study suggest that for bilingual children, a more balanced language environment was a predictor of better response inhibition, while higher frequencies of language mixing predicted worse inhibitory control. Further, specific characteristics of language experience were related to more mature white matter characteristics in the anterior cingulum, a brain region thought to support language control in bilinguals. These findings support the idea that language experience variability among bilinguals is an important factor in examining the effects of bilingualism on cognitive skills and brain structure.

2017-10-17

Conditional blocking in Tutrugbu requires non-determinism: implications for the subregular hypothesis

Adam McCollum, Eric Baković, Anna Mai, and Eric Meinhard UCSD Linguistics

Early work in computational phonology demonstrates that the types of rules used to define phonological transformations belong to the class of regular languages in the Chomsky Hierarchy. More recent work has contended that attested phonological transformations belong to some computationally less complex class, somewhere in the subregular hierarchy. Heinz & Lai (2013), for example, argue that attested vowel harmony patterns are weakly deterministic while unattested, “pathological” patterns are more complex than this, if not non-regular. Contra Heinz & Lai (2013), we demonstrate that the ATR harmony pattern of Tutrugbu (McCollum & Essegbey 2017) is not weakly deterministic. We argue that there is no hard boundary between subregular and regular phonology, but that subregularity exists as a computational simplicity bias, in line with previous work on learning biases in phonology (e.g. Moreton & Pater 2012).

2017-10-10

The cortical organization of syntactic processing in American Sign Language

William Matchin UCSD Linguistics

Syntax, the ability to combine words into sentences, is a core aspect of language, allowing humans to create and communicate new ideas. Despite the fact that American Sign Language (ASL) uses a different sensory-motor modality of communication than spoken languages, it has the same linguistic architecture. This suggests that syntactic processing in ASL could involve the same cortical systems as spoken languages; however, the neural bases of syntactic processing in signed languages have not been clearly established. In order to identify the neural networks involved in syntax, we presented stimuli differing parametrically on degree of structure. Degree of syntactic complexity (maximum constituent size) was correlated with activity in the left posterior and anterior portions of the superior temporal sulcus, brain regions previously showing this same correlation in hearing speakers during the presentation of written French. Sequences of words contrasted with a low-level visual task elicited activity in posterior superior temporal sulcus, as well as bilateral occipital-temporal regions involved in motion and object perception, likely underlying the perception of sign motion and sign handshape/orientation. Overall, our results indicate that core aspects of language are unaltered by the sensory-motor systems involved in communication, and that sign languages involve a “ventral stream” organization analogous to that proposed for spoken languages.

2017-10-03

A shared features model of the representation of irregular polysemes

Andreas Brocher University of Cologne

The representation and retrieval of homonyms (e.g., bank, calf) have been studied for more than 40 years. It has been shown that (a) more frequent meanings are accessed and selected more quickly than less frequent meanings and (b) supporting context can boost the availability of less frequent meanings to or beyond the availability of the more frequent meanings. The extensive research on homonyms has led researchers to argue in favor of a separate-representations account of homonyms. Surprisingly, the processing of irregular polysemes (e.g., cold, cone) has been widely neglected and has only recently caught some attention in the field. Irregular polysemes are lexically ambiguous words whose senses are semantically related in an idiosyncratic way (unlike for regular polysemes, e.g., chicken, Vietnam). I will present data from continuous lexical decision and sentence-reading eye-tracking experiments showing that irregular polysemes (a) show no frequency effects in the presence of a neutral context and (b) show between-sense competition in the presence of context supporting the less frequent sense. These data are most compatible with the view that the multiple senses of irregular polysemes overlap in their representation. Activation of the shared features, in the absence of biasing context, leads to sense non-commitment. Activation of the unshared features, in the presence of biasing context, leads to between-sense competition.

2017-06-06

Verb Phrase Ellipsis is discourse reference: Novel evidence from dialogue

Till Poppels UCSD Linguistics

Some of the most fascinating questions about natural language live at the interface between linguistic and non-linguistic cognition. As a case in point, in this talk I will discuss a linguistic construction known as "Verb Phrase Ellipsis" (VPE), which is illustrated in the following example:
(1) Jordan was planning to go to the mall, but Sofia didn't want to.
A key question in the VPE literature is how the ellipsis clause (Sofia didn't want to) gets to mean what it does, given that its overt linguistic material leaves a great deal unspecified (in this case, that Sofia didn't want to go to the mall). The majority view of VPE takes it to be a "purely linguistic" phenomenon and attempts to model it strictly within the syntax or semantics of the linguistic context. I argue that this is a mistake, and that VPE lives at the interface between linguistic and non-linguistic cognition. I will draw on novel experimental evidence to demonstrate that VPE can break out of the linguistic context, and that it does so in a way that heavily depends on world knowledge. In light of this evidence, which is fundamentally inconsistent with "purely linguistic" theories of VPE, I argue for an alternative theory that models VPE as discourse reference. Time permitting, I will make the case that theories of VPE can be implemented as causal models in a way that makes their assumptions and predictions maximally explicit, and sketch such implementations for four types of theories that have been proposed in the literature.

2017-05-30

Incremental, top-down expectations shape segmental confusability and communicative function: implications for theories of perceptibility effects

Eric Meinhardt UCSD Linguistics

Motivated in part by the hypothesis that the structure of natural languages is shaped by communicative forces (see e.g. Lindblom, 1990), research in Phonetically-Based Phonology (Hayes & Steriade, 2004) has shown that a variety of cross-linguistic sound patterns (e.g. asymmetries in the types and positions of sounds that are triggers vs. targets of consonant cluster assimilation) are explainable with reference to the relative perceptibility of different segment sequences. Such work argues that these patterns motivate positing a large set of Optimality Theoretic constraints directly encoding the relative confusability of all sound-phonotactic environment pairs as a part of an individual’s phonological knowledge (part of ‘Universal Grammar’). Crucially for this talk, the kinds of environments considered to date to affect confusability of a given segment token have been relatively local, e.g. limited to adjacent segments. It is well-established, however, that language processing is incremental and integrates top-down expectations with bottom-up evidence - acoustic or otherwise (Marslen-Wilson, 1975; Tanenhaus et al, 1995).
Through information-theoretic analysis (Nelson, 2008; Ince, 2016) of a Bayesian word recognition model in a simple task where incrementally accumulated context cues can interact with the acoustics of each segment, I derive an expression for the communicative value of the acoustics of each successive segment in a wordform where the interaction of top-down expectations with a segment’s confusability is explicit. I then identify several structural predictions of this expression about where in a wordform or lexicon existing phonological theories either over- or under-estimate the effect of a segment’s confusability on successful word recognition by the listener. I close by arguing that the unappreciated additional complexity and architectural properties a phonological theory would need to account for this variation make alternative theories (Ohala, 2005; Blevins, 2004) that leave the explanatory burden of perceptibility effects in phonotactics to phonetics, psycholinguistics, and models of language change more compelling.
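
A minimal sketch of the kind of incremental Bayesian word-recognition model described above: a context-derived prior over a tiny lexicon is updated segment by segment through a confusability (likelihood) matrix, and each segment's contribution is scored as the entropy reduction it produces over the word candidates. The lexicon, prior, and confusion values are invented, and the analysis is far simpler than the derivation in the talk.

```python
# Toy incremental Bayesian word recognition: a context-derived prior over a tiny
# lexicon is updated segment by segment through a confusability (likelihood) matrix,
# and each segment is scored by the entropy reduction it produces over candidates.
# Lexicon, prior, and confusion values are invented for illustration.
import numpy as np

lexicon = ["pat", "bat", "pad"]
prior = np.array([0.6, 0.3, 0.1])            # top-down expectations from context

segments = ["p", "b", "t", "d", "a"]
idx = {s: i for i, s in enumerate(segments)}
# confusion[intended, perceived] = P(perceived segment | intended segment)
confusion = np.array([
    [0.8, 0.2, 0.0, 0.0, 0.0],   # p
    [0.2, 0.8, 0.0, 0.0, 0.0],   # b
    [0.0, 0.0, 0.8, 0.2, 0.0],   # t
    [0.0, 0.0, 0.2, 0.8, 0.0],   # d
    [0.0, 0.0, 0.0, 0.0, 1.0],   # a
])

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def recognize(perceived):
    posterior = prior.copy()
    for pos, seg in enumerate(perceived):
        likelihood = np.array([confusion[idx[w[pos]], idx[seg]] for w in lexicon])
        updated = posterior * likelihood
        updated /= updated.sum()
        gain = entropy(posterior) - entropy(updated)   # value of this segment, in bits
        print(f"segment {seg!r}: entropy reduction = {gain:.3f} bits")
        posterior = updated
    return {w: round(float(p), 3) for w, p in zip(lexicon, posterior)}

print(recognize("pat"))
```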

2017-05-23

Cognitive Effort in Direct and Inverse Translation Performance: Insight from Eye-Tracking Technology

Aline Ferreira UC Santa Barbara

Directionality in translation (translation from the first language into the second language and from the second language into the first language) is a fairly new area of research in translation studies. There has been an increase in the number of studies that contribute to understanding the cognitive mechanisms involved in the translation process, but there are several gaps in what we know about the practice of inverse translation (IT, translation from the first language into the second language) in contrast to direct translation (DT, translation from the second language into the first language); as such, additional studies are necessary to make further advancements in Translation Process Research.
In this talk I will examine the performance of four professional translators with the aim of exploring the cognitive effort involved in direct and inverse translation. The translators translated two comparable texts from English into Spanish and from Spanish into English. Eye-tracking technology was used to analyze the total time spent in each task, fixation time, and average fixation time. Fixation counts were measured in three areas of interest: the source text, the target text, and a browser used as external support. Results suggest that although total time and fixation count were indicators of cognitive effort during the tasks, the fixation counts in the areas of interest showed that more effort was directed toward the source text in both tasks. Overall, this study demonstrates that while more traditional measures of translation difficulty (e.g., total time) indicate more effort in the inverse translation task, eye-tracking data indicate that differences in the effort applied in the two directions must be carefully analyzed, particularly with respect to the areas of interest.

2017-05-16

Effects of grammatical gender on object description and conceptualization

Arturs Semenuks UCSD Cognitive Science

Can the structure of the language people speak affect how they conceptualize the world around them? Studying whether a language with a grammatical gender system changes how its speakers conceptualize referents of nouns allows us to probe this question. In experiment 1, we investigated whether the grammatical gender of a noun in one's native language affects what adjectives non-native but proficient speakers of English generate for English nouns. The results showed that participants generated more feminine adjectives for nouns with majority feminine translations compared to nouns with majority masculine translations, and that some semantic categories elicited a stronger effect than others. We also found that adjectives generated later for a noun exhibited a larger effect of grammatical gender. In experiment 2, we taught native English speakers a simplified grammatical gender system in the lab and afterwards tested whether that affected the perceived similarity between the experimental stimuli using a non-linguistic task. The subjects tended to rate pictures of referents of nouns as more similar if the nouns shared the same gender. The effect was also observed in cases where participants were simultaneously engaged in a verbal interference task. Taken together, these findings support the view that grammatical gender changes representations of entities on the conceptual level.

2017-05-09

Investigating semantic structure with structural priming

Jayden Ziegler Harvard University

Structural priming, or the tendency to repeat aspects of sentence structure across utterances, provides strong evidence for the existence of abstract structural representations in language (Bock, 1986; for reviews, see, e.g., Branigan & Pickering, 2016; Pickering & Ferreira, 2008). Despite a general consensus that structural priming is primarily a syntactic phenomenon (Bock, 1989; Bock & Loebell, 1990; Chang et al., 2006; Branigan & Pickering, 2016; Branigan et al., 1995), I’ll present (further) evidence that semantic structure can be isolated and primed independently of syntax. Then, I’ll leverage structural priming as a tool to ask specific questions about the nature of these semantic representations—specifically, to what degree do different classes of verbs share or not share the same semantic core? And what, if anything, can this tell us about the mental representation of linguistic meaning? For this, I’ll use datives and locatives as a particularly compelling test case (think: Localist Hypothesis; Jackendoff, 1983). I’ll conclude with lessons learned and future directions.

2017-05-02

Harry Potter and the Chamber of What?:
Brain potentials to congruous words are systematically related to knowledge

Melissa Troyer UCSD Cognitive Science

To understand language, we must connect linguistic input with long-term knowledge of words and concepts. Though such knowledge will vary from person to person as a function of experience, discussions of language processing often neglect this variability. In this talk, I describe two experiments that investigate how variability in specific domain knowledge may influence access to semantic information during real-time language processing. We recorded EEG while participants more or less knowledgeable about the narrative world of Harry Potter (HP) read sentences. In Experiment 1, participants read control sentences (about general topics) and sentences taken from the world of Harry Potter. As expected, participants showed N400 predictability effects for general-knowledge sentences, but only those with high HP knowledge showed predictability effects for sentences about Harry Potter. This effect was driven by graded responses to predictable endings as a function of knowledge. In Experiment 2, we asked participants to make judgments as they read sentences about Harry Potter. We observed greater semantic activation (inferred from N400 effects) for HP items that participants reported knowing compared to those they did not. Notably, this was true for both high- and low-knowledge groups. In addition, our data suggest that high-knowledge (compared to low-knowledge) participants further show greater semantic activation overall, especially for items they reported not knowing/remembering. These findings suggest that during real-time language comprehension, knowledgeable individuals may benefit from rapidly, and perhaps implicitly, accessing knowledge structures that differ in amount and/or functional organization. Future studies will ask how knowledge may shape the organization and use of such knowledge structures during language processing.

2017-04-25

Mommy only really cares about my grammar! Child-mother dyads integrate a dynamical system

Fermin Moscoso del Prado Martin UC Santa Barbara

Both the language used by young children (child language; CL) and the simplified language used by caretakers when talking to them (child-directed speech; CDS) become increasingly complex along development, eventually approaching regular adult language. Researchers disagree on whether children learn grammar from the input they receive (usage-based theories), or grammars are mostly innate, requiring only minimal input-based adjustments on the part of the children (nativist theories). A related question is whether parents adapt the complexity of CDS in specific response to their children's language abilities (fine-tuning), or only in response to their level of general cognitive development. Previous research suggests that parent-child interactions can be modelled by nonlinear dynamical systems. Following this direction, I adapt a technique recently developed in Ecology --Convergent Cross-Mapping (CCM)-- for assessing causal relations between the longitudinal co-development of aspects of CL and CDS. CCM enables reconstructing a network of causal relations involving aspects of CL and CDS. This network supports a mutual bootstrapping between lexical and grammatical aspects of CL. In addition, the network reveals explicit couplings between the language used by individual children and their mothers. This provides explicit evidence for fine-tuning: Mothers adapt CDS in response to the specific grammatical properties of CL (but apparently not its lexical properties). Our findings verify the strong causal predictions of usage-based theories, and are difficult for nativist theories to account for.
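
As a rough illustration of how Convergent Cross-Mapping works, the sketch below time-delay-embeds one series, uses nearest neighbours on that "shadow manifold" to estimate the other series, and reports the cross-mapping skill in both directions for a toy pair of coupled logistic maps in which x drives y. The toy system and all parameters are invented for illustration; the study described above applies CCM to longitudinal measures of child language and child-directed speech.

```python
# Simplified convergent cross-mapping (CCM): if x drives y, the time-delay embedding
# of y retains information about x, so nearest neighbours on y's "shadow manifold"
# can be used to estimate x. The coupled toy system below is invented; in the talk
# the variables are longitudinal measures of child language and child-directed speech.
import numpy as np

def embed(series, E=3, tau=1):
    """Time-delay embedding: rows are [y_t, y_{t-tau}, ..., y_{t-(E-1)tau}]."""
    n = len(series) - (E - 1) * tau
    return np.column_stack([series[(E - 1 - k) * tau:(E - 1 - k) * tau + n] for k in range(E)])

def ccm_skill(x, y, E=3, tau=1):
    """Correlation between x and its cross-mapped estimate from y's shadow manifold."""
    My = embed(y, E, tau)
    x_aligned = x[(E - 1) * tau:]
    dists = np.linalg.norm(My[:, None, :] - My[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)               # exclude self-matches
    x_hat = np.empty_like(x_aligned)
    for t in range(len(My)):
        nn = np.argsort(dists[t])[:E + 1]         # E+1 nearest neighbours
        d = dists[t, nn]
        w = np.exp(-d / max(d[0], 1e-12))
        w /= w.sum()
        x_hat[t] = w @ x_aligned[nn]
    return float(np.corrcoef(x_aligned, x_hat)[0, 1])

# Toy coupled logistic maps in which x influences y but not vice versa.
n = 500
x, y = np.zeros(n), np.zeros(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    x[t + 1] = 3.8 * x[t] * (1 - x[t])
    y[t + 1] = 3.5 * y[t] * (1 - y[t] - 0.1 * x[t])

print("estimate x from y's manifold:", round(ccm_skill(x, y), 2))  # typically higher: x -> y
print("estimate y from x's manifold:", round(ccm_skill(y, x), 2))  # typically lower
```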

2017-04-18

Prosodic and gestural prominence cues in multimodal speech processing

Diana Dimitrova UCSD Psychology

In speech communication listeners need to identify important information in order to focus their processing resources on it and arrive at a meaningful interpretation of the message. This process is facilitated by information structure – the division of a message in informative and less informative parts by means of linguistic cues like pitch accents and clefts. In multimodal communication listeners have additional access to nonverbal cues like beat gestures, which do not carry any semantic meaning but are aligned with the rhythmic structure of the message. The goal of the present EEG study was to test whether beat gestures have any focusing function in dialogue, similarly to pitch accent, and whether listeners integrate them with the information structure of the message. Our results show that prosodic and gestural cues both modulated early sensory and attention mechanisms in speech processing. Importantly, beat gesture and focus interacted in a late time window and gave rise to a Late Positivity effect when nonfocused information was accompanied by a beat gesture. Our results suggest that beat gestures fulfill a unique focusing function in multimodal discourse processing and that they have to be integrated with the information structure of the message.

2017-04-11

A speech core: Evidence from structural and functional brain connectivity and genetic phenotyping

Jeremy Skipper UCL

What neurobiological adaptation allows humans to produce and perceive speech so effortlessly compared to other animals? I show that speech is supported by a largely undocumented core of structural and functional connectivity between primary motor and somatosensory cortex and primary auditory cortex. Anatomically, I show that primary motor and auditory cortical thickness covary across individuals and that they are connected by white matter tracts. Neuroimaging network analyses confirm the functional relevance and specificity of these structural relationships. Specifically, primary motor and auditory cortex are functionally connected at rest, during natural audiovisual speech perception, and over a large variety of linguistic stimuli and tasks. Importantly, across structural and functional analyses, the connectivity of regions immediately adjacent to primary auditory cortex is with premotor and prefrontal regions rather than the primary motor cortex. Finally, I show that this structural/functional primary motor and auditory cortex relationship is mediated by a constellation of genes associated with vocal learning and disorders of prediction or efference copy. I suggest that this core network constitutes an early-developing interface for rapidly exchanging articulatory and acoustic information. It supports vocal learning and speech production by providing a pathway for efference copies to be compared with sensory feedback, enabling error correction. This same mechanism is reused during speech perception to provide an articulatory constraint on acoustic variability, resulting in perceptual constancy.

2017-03-14

The Role of Working Memory in Language Representation and Processing

Robert Kluender UC San Diego

The current consensus across cognitive domains is that working memory may be a mere epiphenomenon arising from the attention system acting upon long-term memory representations. This idea has serious consequences for both the nature of the human lexicon as well for the nature of syntactic representation and processing. Likewise, the psychological reality of storage and/or maintenance functions (including capacity limitations), long considered a primary explanandum of the verbal working memory literature, has more recently been challenged and subjected to re-evaluation.

In this talk I present a brief overview of theoretical proposals regarding the general nature of working memory across cognitive domains, show how the language ERP literature on long-distance dependencies can successfully be recast in terms of encoding and retrieval to the exclusion of storage operations, and then turn to recent research in the visual working memory literature that presents intriguing parallels to issues prevalent in the verbal working memory literature.

I pose a number of questions for discussion that it seems to me need to be addressed going forward (to which – truth in advertising – I do not necessarily have any good answers): If working memory truly reduces to the focus of attention operating over long-term memory representations, then what is the exact nature of the linguistic representations in long-term memory over which the attention system operates in sentence processing? What is the nature of lexical entries in long-term memory and how are they assembled on line in any plausible fashion into syntactic representations? Feature representations are crucial to visual working memory paradigms for obvious reasons, and similarity-based interference models of verbal working memory likewise suggest that retrieval is primarily feature-driven. If this is true, what constraints does this fact impose on syntactic representations? To what extent do existing syntactic theories satisfy these constraints, and could such constraints be used as metrics to differentiate and possibly evaluate theories of syntactic representation against each other?

2017-03-07

Description of visual scenes as well as sentence comprehension, using the Schema Architecture Language-Vision InterAction (SALVIA) cognitive model

Victor Barres USC

The Schema Architecture Language-Vision InterAction (SALVIA) is a cognitive level model of the dynamic and incremental interactions that take place between the visuo-attentional system and the language production system. By simulating the production of scene descriptions, SALVIA provides an explicit framework to study the coordinated distributed processes that support visual scene apprehension, conceptualization, and grammatical processing leading to utterance formulation. In this presentation I will focus on how SALVIA reframes the psycholinguistic debate regarding the relations between gaze patterns and utterance forms, moving away from a dichotomy between serial modular (Griffin et al. 2000) and interactive views (Gleitman et al. 2007). By modeling simultaneously the impact on the system's dynamics of the type of scene and of the task temporal requirements, the two views become two key points embedded in the more general model’s behavioral space. I will show how this can be shown using the controversial case of the impact of attention capture manipulations, within the Visual World Paradigm experiments, on utterance structure. On the way, I will show, as a preliminary but necessary result, how SALVIA models the impact of time pressure on the quality of utterances produced (measured by their structural compactness and grammatical complexity). As time permits, and to insist both on the necessity to move from a cognitive to a neurocognitive model, as well as on the necessity to move beyond one-sided models of our language cognitive apparatus, I will discuss how SALVIA is extended into a model of language comprehension with this time the additional constraints of simulating key neuropsychology data points (with a focus on agrammatism).

2017-02-28

Interactive Communicative Inference

Larry Muhlstein University of Chicago

In the search for an understanding of human communication, researchers often try to isolate listener and speaker roles and study them separately. Others claim that it is the intertwinedness of these roles that makes human communication special. This close relationship between listener and speaker has been characterized by concepts such as common ground, backchanneling, and alignment, but they are only part of the picture. Underlying all of these processes, there must be a mechanism that we use to make inferences about our interlocutors’ understanding of words and gestures that allows us to communicate robustly without assuming that we all take the same words to have the same meaning. In this talk, I explore this relationship between language and concepts and propose a mechanism through which communicative interaction can facilitate these latent conceptual inferences. I argue that using this mechanism to augment our understanding of human communication paves the way for a more precise account of the role of interaction in communication.

2017-02-21

Language & Communicative Disorders JDP Anniversary Symposium

Karen Emmorey, Rachel Mayberry, and others UC San Diego/San Diego State University

2017-02-14

Individual Differences in Children's Learning of Measurement and Chemical Earth Science Concepts

Nancy Stein University of Chicago

Math and science concepts can be broken down into those that require conceptual understanding without engaging in mathematical calculations and those that require explicit understanding of numerical operations. Both types of knowledge are critical, but success in them is predicted by different cognitive aptitudes. Four different studies are reported, where a total of 420 4th graders were assessed on digit span, spatial ability, and vocabulary-verbal comprehension to explore the role these skills played in the acquisition of mathematical measurement and chemical earth science concepts. Results showed that digit span predicted success on any item requiring numerical processing (correlation between success on measurement items and digit span, r = 0.69). Any concept not requiring numerical operations correlated with digit span at the r = 0.21 level. The more significant correlation for scientific conceptual understanding was with vocabulary and verbal comprehension, correlated at the r = 0.49 level. Digit span not only predicted performance on numerical items, but also predicted the number of repetitions needed to acquire accurate knowledge of multiplication. Similar findings emerged for accurate learning of conceptual content in relation to vocabulary comprehension. Spatial reasoning was not significantly related to either type of item success.

2017-02-07

Language learning, language use, and the evolution of linguistic structure

Kenny Smith University of Edinburgh

Language is a product of learning in individuals, and universal structural features of language presumably reflect properties of the way in which we learn. But language is not necessarily a direct reflection of properties of individual learners: languages are culturally-transmitted systems, which persist in populations via a repeated cycle of learning and use, where learners learn from linguistic data which represents the communicative behaviour of other individuals who learnt their language in the same way. Languages evolve as a result of their cultural transmission, and are therefore the product of a potentially complex interplay between the biases of human language learners, the communicative functions which language serves, and the ways in which languages are transmitted in populations. In this talk I will present a series of experiments, based around artificial language learning, dyadic interaction and iterated learning paradigms, which allow us to explore the relationship between learning and culture in shaping linguistic structure; I will finish with an experimental study looking at cultural evolution in non-human primates, which suggests that systematic structure may be an inevitable outcome of cultural transmission, rather than a reflection of uniquely human learning biases.

2017-01-31

Metaphor & Emotion: Frames for Dealing with Hardship

Rose Hendricks UC San Diego

Do metaphors shape people’s emotional states and beliefs about dealing with adversity? Recovery from cancer is one hardship that many people face, and it can be mediated by the way people think about it. We investigate whether two common metaphors for describing a cancer experience – the battle and the journey – encourage people to make different inferences about the patient’s emotional state. I'll also share work looking at the language that people produce after encountering these metaphors, using it as a window into the mental models they construct and the ways they communicate metaphor-laden emotional information. This line of work is still in early stages, so I look forward to your insightful feedback!

2017-01-24

A Neurocomputational Model of the N400 and the P600 in Language Comprehension

Harm Brouwer Saarland University

Ten years ago, researchers using event-related brain potentials (ERPs) to study language comprehension were puzzled by what looked like a Semantic Illusion: Semantically anomalous, but structurally well-formed sentences did not affect the N400 component — traditionally taken to reflect semantic integration — but instead produced a P600 effect, which is generally linked to syntactic processing. This "Semantic P600"-effect led to a considerable amount of debate, and a number of complex processing models have been proposed as an explanation. What these models have in common is that they postulate two or more separate processing streams, in order to reconcile the Semantic Illusion and other semantically induced P600 effects with the traditional interpretations of the N400 and the P600. In this talk, we will challenge these multi-stream models, and derive a simpler single-stream model, according to which the N400 component reflects the retrieval of word meaning from semantic memory, and the P600 component indexes the integration of this meaning into the unfolding utterance interpretation. We will then instantiate this "Retrieval–Integration (RI)" account as an explicit neurocomputational model. This neurocomputational model is the first to successfully simulate N400 and P600 amplitude in language comprehension, and simulations with the model show that it captures N400 and P600 modulations for a wide spectrum of signature processing phenomena, including semantic anomaly, semantic expectancy, syntactic violations, garden-paths, and crucially, constructions evoking a "Semantic P600"-effect.

2017-01-17

A model of Event Knowledge

Jeff Elman UC San Diego

It has long been recognized that our knowledge of events and situations in the world plays a critical role in our ability to plan our own actions and to understand and anticipate the actions of others. This knowledge also provides us with useful data for learning about causal relations in the world. What has not been clear is what the form and structure of this knowledge is, how it is learned, and how it is deployed in real-time. Despite many important theoretical proposals, often using different terminology – schemas, scripts, frames, situation models, event knowledge, among others – a model that addresses all three questions (the form, learning, and deployment of such knowledge) has remained elusive. In this talk I present a connectionist model of event knowledge developed by Ken McRae and myself that attempts to fill this gap. The model simulates a wide range of behaviors that have been observed in humans and seen as reflecting the use of event knowledge. The model also makes testable predictions about behaviors not hitherto observed. The model exhibits a flexibility and robustness in the face of novel situations that resembles that seen in humans. Most importantly, the model’s ability to learn event structure from experience, without prior stipulation, suggests a novel answer to the question ‘What is the form and representation of event knowledge?’

2016-11-29

How much grammar does it take to use a noun? Syntactic effects in bare-noun production and comprehension

Nicholas Lester UCSB Linguistics

Many psycholinguistic paradigms investigate lexical processing using stimuli or techniques that target single words (lexical decision, picture naming, word naming, etc.). The fruit of this research is offered to explain the structure and flow of information within the mental lexicon. Understandably, these studies do not usually concern themselves with syntax beyond a small set of lexical categories (with some empirical support; e.g., La Heij et al., 1998). However, several studies have recently suggested that syntactic information is obligatorily accessed during the processing of individual words (e.g., Baayen et al., 2011; Cubelli et al., 2005). These studies have likewise focused on categorical information (e.g., part-of-speech, gender, count/mass), though some recent work has explored lexical variability within a single phrasal construction (e.g., the frequency distribution of prepositions across target nouns within prepositional phrases). Going further, linguistic theory suggests that distributions across syntactic constructions may also play a role (combinatoric potential; e.g., Branigan & Pickering, 1998). Psycholinguistic support for this notion comes from research on morphosyntactic distributions. For example, Serbian words, which are inflected for grammatical role (among other things), are recognized faster to the extent that (a) they approach a uniform probability distribution across case inflections (Moscoso del Prado Martín et al., 2004) and (b) they approach the prototypical probability distribution for words in their inflectional class (Milin et al., 2009).
In this talk, I provide evidence for a new, fully generalized syntactic effect in English lexical processing. I introduce several novel information-theoretic measures of syntactic diversity. These measures tap into both hierarchical asymmetries (heads vs. dependents) and word order. I correlate these measures with response times in several tasks, including picture naming, word naming, and visual lexical decision. Results suggest that syntax supports the processing of individual nouns in both production and comprehension, with a caveat: processing modalities may be tuned to different features of the syntactic distributions. Implications for representational and functional architecture are discussed. So, how much grammar does it take to use a noun? A lot.
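To give a concrete sense of what an information-theoretic measure of syntactic diversity can look like, here is a minimal sketch that computes the Shannon entropy of a noun's distribution over syntactic contexts. The context labels and counts are invented for illustration, and this is not one of the specific measures presented in the talk.

```python
from collections import Counter
from math import log2

def syntactic_entropy(contexts):
    """Shannon entropy (bits) of a word's distribution over syntactic contexts.

    `contexts` is a list of context labels (e.g., dependency relations or
    construction types) observed with the word in a corpus. Higher entropy
    means the word is spread more evenly over more contexts.
    """
    counts = Counter(contexts)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Toy observations for two hypothetical nouns: one concentrated in a single
# construction, one spread more evenly across several.
dog = ["nsubj"] * 40 + ["dobj"] * 5 + ["pobj"] * 5
idea = ["nsubj"] * 15 + ["dobj"] * 15 + ["pobj"] * 10 + ["nmod"] * 10

print(f"dog:  {syntactic_entropy(dog):.2f} bits")   # lower syntactic diversity
print(f"idea: {syntactic_entropy(idea):.2f} bits")  # higher syntactic diversity
```

Measures of this general kind can then be correlated with response times from picture naming, word naming, or lexical decision, which is the logic of the studies described above.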

2016-11-22

A Neurocomputational Model of Surprisal in Language Comprehension

Matthew Crocker Saarland University

Surprisal Theory (Hale, 2001; Levy, 2008) asserts that the processing difficulty incurred by a word increases with its unexpectedness – its 'surprisal' – as estimated by probabilistic language models. Such models are limited, however, in that they assume expectancy is determined by linguistic experience alone, making it difficult to accommodate the influence of world and situational knowledge. To address this limitation, we have developed a neurocomputational model of language processing that seamlessly integrates linguistic experience and probabilistic world knowledge in online comprehension. The model is a simple recurrent network (SRN: Elman, 1990) that is trained to map sentences onto rich probabilistic meaning representations that are derived from a Distributed Situation-state Space (DSS: Frank et al., 2003). Crucially, our DSS representations allow for the computation of online surprisal based on the likelihood of the sentence meaning for the just-processed word, given the sentence meaning up to just before the word was encountered. We then demonstrate that our 'meaning-centric' characterisation of surprisal provides a more general index of the effort involved in mapping from the linguistic signal to rich and knowledge-driven situation models – capturing not only established surprisal phenomena reflecting linguistic experience, but also offering the potential for surprisal-based explanations for a range of findings that have demonstrated the importance of knowledge-, discourse-, and script-driven influences on processing difficulty.
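As a rough illustration of the quantity at issue, the sketch below computes surprisal, −log₂ P(event | context), first over words (as in standard probabilistic language models) and then over toy situation meanings in the spirit of the meaning-centric account described above. All probabilities are invented for illustration; the model discussed in the talk is a trained SRN over DSS representations, not a lookup table.

```python
from math import log2

def surprisal(p):
    """Surprisal in bits: -log2 P(event | context)."""
    return -log2(p)

# Word-based surprisal from a toy language model: probability of the next
# word given the preceding words (numbers are invented).
p_word = {"The boy ate the": {"cake": 0.20, "broccoli": 0.02}}
ctx = "The boy ate the"
for w, p in p_word[ctx].items():
    print(f"word surprisal of '{w}': {surprisal(p):.2f} bits")

# Meaning-based surprisal: probability of the situation implied once the word
# is processed, given the situation implied just before it (again, toy values).
p_situation = {"eat(boy, cake)": 0.25, "eat(boy, broccoli)": 0.05}
for s, p in p_situation.items():
    print(f"meaning surprisal of {s}: {surprisal(p):.2f} bits")
```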

2016-11-15

Language comprehension in rich visual contexts: combining eye tracking and EEG

Pia Knoeferle Humboldt University Berlin

Listeners' eye movements to objects during spoken language comprehension have provided good evidence for the view that information from a non-linguistic visual context can rapidly affect syntactic structuring, and this evidence has shaped theories of language comprehension. From this research we have further learnt that the time course of eye movements can reflect distinct comprehension processes (e.g., visual attention to objects is slowed for non-canonical relative to canonical sentence structures). Good evidence that visual context affects distinct syntactic disambiguation and lexical-semantic processes has come, moreover, from the analysis of event-related brain potentials (ERPs). However, not all visual context effects seem to tap into distinct comprehension processes (e.g., incongruence between different spatial object depictions and an ensuing sentence results in the same ERP pattern). The present talk reviews the literature on visually situated language comprehension; with a view to future research, I will further outline what theoretically interesting insights we might gain by jointly recording eye-tracking and event-related brain potentials during visually situated language comprehension.

2016-11-08

Aligning generation and parsing

Shota Momma UCSD Psychology

We use our grammatical knowledge in more than one way. On one hand, we use our grammatical knowledge to say what we want to say. On the other hand, we use our grammatical knowledge to comprehend what others are saying. In either case, we need to assemble the structure of sentences in a systematic fashion, in accordance with the grammar of the language. Despite the fact that the structures that comprehenders and speakers assemble are systematic in an identical fashion (i.e., obey the same grammatical constraints), the two ‘modes’ of assembling sentence structures might or might not be performed by the same system. The potential existence of two independent systems of structure building underlying speaking and understanding doubles the problem of linking the theory of linguistic knowledge and the theory of linguistic performance, making the integration of linguistics and psycholinguistics harder. In this talk, I will discuss whether it is possible to design a single system that does structure building in comprehension, i.e., parsing, and structure building in production, i.e., generation, so that the linking theory between knowledge and performance can also be unified into one. I will discuss both existing and new experimental data pertaining to how sentence structures are assembled in understanding and speaking, and attempt to show that the unification between parsing and generation is plausible.

2016-11-01

Biology and culture in the evolution of rhythm

Andrea Ravignani Vrije Universiteit Brussel

Many human behaviours, like music and language, show structural regularities, some shared across all cultures and traditions. Why musical universals exist has been the object of theoretical speculation but has received little empirical attention. Here, by focusing on rhythm, we test the mechanisms underlying musical universals. Human participants are asked to imitate sets of randomly generated drumming sequences, after which their attempts at reproduction become the training set for the next participants in a transmission chain. The structure of drumming patterns, transmitted in independent chains of participants across cultural generations, “evolves,” adapting to human biology and cognition. Drumming patterns transmitted within cultures develop into rhythms which are easier to learn, distinctive for each experimental cultural tradition, and characterized by all six universals found among world music. Rhythmic structure hence emerges from repeated enhancement of features that adapt to be easily perceived, imitated and transmitted within a culture.

2016-10-25

Beyond brilliant babies and rapid acquisition:
Protracted perceptual learning better explains spoken language development

Sarah Creel UCSD Cognitive Science

Approaches to lower level language development – sounds and words – have typically focused on the first year of life and shortly beyond. However, the emphasis on rapidity of learning and early sensitivity has yielded only a partial picture of development. New technical and experimental advances have led to a reconceptualization of language development that emphasizes protracted processes of perceptual and associative learning during development, which may undergird more rapid real-time processes that cope with the ambiguity of language in the moment. This dramatic shift in perspective is more than a debate about the speed of learning: by moving to a view of speech and lexical development that extends considerably outside of infancy, new developmental factors—vocabulary, reading, speech production, and social interaction—may come into play, augmenting simple perceptual learning or statistical mechanisms.
I will present research on young (3- to 7-year-old) children’s recognition of speakers, accents, and affective prosody that together suggest the need for a new theoretical approach to perceptual category learning in the speech signal. Seemingly counter to claims of precocious native-language sound sensitivity in infants, my work suggests steady, incremental increases in children’s processing of indexical and paralinguistic information. Importantly, these “nonspeech” aspects of the speech signal—who is talking and what their affective state is—contribute integrally to full-scale adult language comprehension, and influence children's comprehension to the extent that children can access this information. The picture emerging from my work is that spoken language representations develop via a process of slow distributional learning, in combination with slow encoding of associations between sound patterns (voice properties, accent characteristics) and person knowledge.

2016-10-18

Frontal control mechanisms in language production

Stéphanie K. Riès San Diego State University

Adults fluidly utter 2 to 3 words per second selected from up to 100,000 words in the mental lexicon and only err once every 1000 words. Although seemingly easy, producing language is complex and depends on cognitive control processes that may be shared with non-linguistic cognitive functions. In particular, choosing words cannot be carried out adequately without cognitive control processes. Despite the central importance of our capacity to produce language and the immense personal and societal cost caused by its disruption, the spatio-temporal pattern of activation of the brain regions involved in word selection and the precise role of these brain regions are largely unknown. I will present results from scalp and intracranial electrophysiological studies and neuropsychological studies beginning to shed light on these issues. These results support the hypotheses that posterior inferior left temporal cortex engages in word retrieval as semantic concepts become available. In parallel, medial and left prefrontal cortices tune in with left temporal activity on a trial-by-trial basis, supporting top-down control over interference resolution for word retrieval. Finally, computational modeling of neuropsychological data suggests the left PFC plays a role in the adjustment of the decision threshold for word selection in language production.

2016-10-11

On the relation between syntactic theory and sentence processing, and a theory of island phenomena

William Matchin UCSD Linguistics

There is currently a seemingly intractable gulf among different syntactic theories. Generative syntactic theories in the Minimalist Program are insightful in providing a theory of the objects of language, tree structures. However, Minimalism does a poor job of explaining real-time sentence processing, child language acquisition, and neuroimaging and neuropsychological data. By contrast, “lexicalist” grammatical theories (e.g., TAG, Construction grammar, Unification) do a much better job at connecting with real-time sentence processing, language acquisition, and neuroscience. However, lexicalist approaches lack insight into the objects of language: where do these stored structures come from, and why do they have the properties that they do? In this talk I propose a reconciliation between the two approaches along the lines of Frank (2002), by positing a Minimalist grammar as a theory of how structures are generated, and TAG as a theory of the use of these structures during sentence production and comprehension. By making these connections more explicit, it is also possible to incorporate recent insights into the nature of working memory during sentence processing in explaining data traditionally covered by the theory of syntax. I argue that this integrated approach provides more successful insight into the nature of island phenomena than extant grammatical and processing accounts.

2016-10-04

What do you mean, no? Studies in the development of negation

Roman Feiman UCSD Psychology

The words "no" and "not" have very abstract meanings -- among other things, they can combine with the meanings of other phrases to change the truth-value of a sentence. That they can do this in combination with very diverse semantic content requires that the other representations all be in some common format -- components in what is sometimes called the Language of Thought. Charting the development of logical words and concepts can play a role in constraining theories of how (and if) this format of representation might emerge.
Despite its abstract meaning, "no" is one of the first words kids say. Does this word carry its truth-functional meaning right away, or is it used in a different way by the youngest children? Arguing that prior studies of production cannot answer this question, I will present a line of research examining children's comprehension of the words "no" and "not". We find that, although they produce "no" at 16 months, children do not begin to understand the logical meaning of both "no" and "not" until after they turn two, nearly a year later. Additional eyetracking studies, looking at the online processing of negation, reveal some of the difficulty in constructing representations of negated content, showing separate semantic and pragmatic components.
Why does it take so long for kids to get the logical meaning of "no" from the time they start saying it, and why do they get the meanings of "no" and "not" at the same time? There are two general possibilities -- either the concept is not available for labeling until 24 months, or the word-to-concept mapping is a particularly hard problem to solve. I'll present some ongoing work that looks to disentangle these factors by comparing typical English-learning toddlers to older children adopted from Russia and China who are learning English for the first time, but have greater conceptual resources.

2016-05-31

Innovating a communication system interactively: Negotiation for conventionalization

Ashley Micklos Linguistics and Anthropology, UCLA

The study I will present demonstrates how interaction – specifically negotiation and repair – can facilitate the emergence, evolution, and conventionalization of a silent gesture communication system (Goldin-Meadow et al, 2008; Schouwstra, 2012). In a modified iterated learning paradigm (Kirby, Cornish, & Smith, 2008), partners communicated noun-verb meanings using only silent gesture. The need to disambiguate similar noun-verb pairs (e.g. “a hammer” and “hammering”) drove these "new" language users to develop a morphology that allowed for quicker processing, easier transmission, and improved accuracy. The specific morphological system that emerged came about through a process of negotiation within the dyad. Negotiation involved reusing elements of prior gestures, even if temporally distant, to communicate a meaning. This is complementary to the same phenomenon that occurs in speech produced over multiple turns (Goodwin, 2013). The face-to-face, contingent interaction of the experiment allows participants to build from one another’s prior gestures as a means of developing systematicity over generations. Transformative operations on prior gestures can emerge through repair as well. Immediate modification on a gesture can involve a reference to the gesture space or a particular element of the gesture. We see examples of this in other-initiated repair sequences (Jefferson, 1974) within the communication game. Over simulated generations, participants modified and systematized prior gestures to conform to emergent conventions in the silent gesture system. By applying a discourse analytic approach to the use of repair in an experimental methodology for language evolution, we are able to determine not only if interaction facilitates the emergence and learnability of a new communication system, but also how interaction affects such a system.

2016-05-24

What are “pronoun reversals” on the autism spectrum and beyond?

David Perlmutter Linguistics, UCSD

Utterances like (1-2) by children on the autism spectrum and others (with translations into adult English in parentheses) exemplify “pronoun reversal” (PR):
(1) You want ride my back. (‘I want to ride on your back.’) 
(2) (At bedtime:) Me cover you Mommy. (‘You cover me, Mommy.’)
Researchers have treated PR as examples of children’s “errors” and “confusion.” They have focused on tabulating the percentage of such “errors” by children in different populations and at different stages of language acquisition.
This paper seeks a better understanding of PR by focusing on other questions:
(3) What is PR? 
(4) How is it acquired? 
(5) How is it lost? 
(6) Why does it exist?
We argue that PR is not about pronouns.
First, cross-linguistic evidence shows that while person is expressed on pronouns in English, in many other languages it is expressed on verbs. PR is really “person reversal.” We make two cross-linguistic predictions explicit.
Second, we argue that at a more fundamental level, PR is about the kinds of utterances in which PR appears. We distinguish two kinds of utterances: S-clones, which closely approximate (“clone”) the structure (including person) of utterances the child has heard (we also show how this differs from “imitation”), and independent utterances (“indies”) initiated by the child. S-clones predominate in the early production of young children, for whom constructing indies is far more difficult.
The empirical heart of this paper lies in our evidence from two longitudinal case studies of person-reversing children, showing that PR predominates in S-clones, while adult pronoun usage predominates in indies (even in the same time frame). The data illuminate contrasts between S-clones and indies.
What is PR? Why does it exist? We argue that PR is the expression of person in S-clones. As such, it derives from the source utterances from which S-clones are cloned. PR exists because it is a consequence of children’s S-cloning of heard utterances in the early stages of language acquisition. We show how this provides an account of how PR is acquired and maintained, using data from ASL, Slovenian, and English. As for its loss, our analysis makes a prediction: as children learn to construct indies and the ratio of S-clones to indies in their production declines correspondingly, so will the incidence of PR. The data currently available supports this prediction, but more data is needed to confirm or refute it.
We conclude by noting the potential utility of the concepts “S-clone” and “indie” in the study of language acquisition in general. We speculate that gaining the ability to construct indies, and to do so at an ever-increasing rate, is a significant turning point in the acquisition of language.

2016-05-17

An old-fashioned theory of digital propaganda

Tyler Marghetis Psychological and Brain Sciences, Indiana University

Sanders’ stump speeches. Family dinner diatribes. Water-cooler screeds. When we listen to others, their utterances can reshape our thinking to conform to theirs, sometimes against our will. Such is the power of propaganda. I’d like to consider one possible mechanism of this mind-control: “digital propaganda,” where digital retains its traditional reference to digits, fingers. By digital propaganda, therefore, I mean the use of co-speech gesture to propagate and perpetuate specific beliefs and larger conceptual frameworks.
In this talk, I focus on the propagation of entirely abstract domains—such as math, time, economics, or family relations. First, extending classic work on concrete, literal gestures, we demonstrate that metaphorical gestures can completely reverse interpretations of accompanying abstract speech. This occurs even when the listener is unaware of the source of their interpretation, misremembering gestural information as having been in speech. Next, we show that these metaphorical gestures have downstream effects on subsequent reasoning, mediated by the effect of gesture on interpretation. And we show that digital propaganda isn’t limited to isolated facts but can shape the mental representation of an entire abstract domain. In the spirit of clearing the file-drawer, I end by reporting a rather frustrating experimental failure in which metaphorical gestures had little or no impact on comprehension. (Interpretative help is welcome!) The hands, therefore, are a tool for digital propaganda, spreading abstract beliefs and encompassing frameworks -- at least sometimes.

2016-05-10

Resumptive Pronouns: What can we learn from an ungrammatical construction about grammar, sentence planning, and language acquisition?

Adam Morgan UCSD, Psychology

"This is an example of a structure that nobody knows why we use it." Resumptive pronouns, like the "it" in the previous sentence, present a problem for standard accounts of grammar. On one hand, English speakers report that they sound bad, which typically indicates ungrammaticality. On the other hand, corpus and experimental work show that English speakers reliably produce resumptive pronouns in certain types of clauses, which seems to imply grammatical knowledge. Furthermore, resumptive pronouns exist and are grammatical in other languages, including Hebrew, Gbadi, and Irish. But if Hebrew- and English-speaking children are exposed to resumptive pronouns, then why does only the former group grammaticize them? In this talk, I will present a series of paired production and acceptability judgment studies whose results indicate that resumptive pronouns in English are a by-product of an early breakdown in production. I will then present pilot data from a production task in Hebrew, and discuss implications for the learnability of a grammatical pattern as a function of its frequency in the language.

2016-05-03

Language to Literacy: The facilitative role of early vocabulary in English

Margaret Friend SDSU Psychology

The perspective that emerging literacy is dependent upon earlier developing language achievement guides the present paper. Recent large-scale studies have demonstrated a relation between early vocabulary and later language and literacy. Of particular interest are the mechanisms by which vocabulary comprehension in the 2nd year of life might support the acquisition of skills related to kindergarten readiness in the 5th year. Toward this end, we contrast parent report of early vocabulary with a direct, decontextualized assessment. Study 1 assesses the relation between word comprehension in the 2nd year and kindergarten readiness in the 5th year, controlling for language proficiency, in a group of monolingual English children. As expected, decontextualized receptive vocabulary at 22 months emerged as a significant predictor of kindergarten readiness, accounting uniquely for 29% of the variance when controlling for parent-reported vocabulary, maternal education, and child sex. This effect was fully mediated by decontextualized vocabulary in the 5th year, such that concurrent PPVT scores accounted for 34% of the variance when controlling for maternal education, child sex, and early vocabulary. Importantly, early vocabulary significantly predicted PPVT scores, accounting for 19% of the variance. Study 2 replicates these findings in a sample of monolingual French children. Finally, Study 3 extends this general pattern of findings to a sample of French-English bilingual children. It is argued that early, decontextualized vocabulary supports subsequent language acquisition, which in turn allows children to more readily acquire skills related to emergent literacy and kindergarten readiness.

2016-04-26

How to speak two languages for the price of one

Daniel Kleinman Beckman Institute, University of Illinois

Bilinguals often switch languages spontaneously even though experimental studies consistently reveal robust switch costs (i.e., it takes more time to respond in a language different than the one used on the previous trial). Do bilinguals always make these spontaneous switches despite the costs, or can switching be cost-free under circumstances that lab tasks don’t capture? I will discuss several picture naming experiments (conducted with collaborator Tamar Gollan) in which bilinguals were instructed to switch languages in such a way that they would only switch when the name of the concept they wanted to express was more accessible in the language they were not currently speaking. These instructions, which constrained bilinguals’ language choices, led them to switch between languages without any cost, and even maintain two languages in readiness as easily as a single language. In contrast, when bilinguals were given full freedom to switch between languages at any time, most opted for less efficient strategies that led to switch costs. These results demonstrate that cost-free language switching and language mixing are possible and that language switching efficiency can be increased by reducing choice.

2016-04-19

An ERP study of predictability and plausibility in sentence processing

Megan Bardolph UCSD, Cognitive Science

Because of the underlying structure present in language, many models of language processing suggest that people predict not only general semantic content of discourse, but also specific lexical features of upcoming words in sentences. I will present an ERP study that explores the nature of predictability and plausibility in sentence processing. This fine-grained analysis shows how measures of predictability (including sentence constraint, cloze probability, and LSA) and plausibility affect ERP measures of processing, both the N400 and late positivities.

2016-04-12

Gird your loins! A conversation about emotion, embodiment, and swearing

Ben Bergen and Piotr Winkielman UCSD

Cognitive Science professor and psycholinguist Ben Bergen and Psychology professor and emotion researcher Piotr Winkielman will have a discussion about mind, body, and profanity. Audience participation is encouraged, so please come with questions!

2016-04-05

Why (and when) do speakers talk like each other?

Rachel Ostrand Cognitive Science, UCSD

During interactive dialogue (conversation) as well as in non-interactive speech (e.g., answering questions or speech shadowing), speakers modify aspects of their speech production to match those of their linguistic partners. Although there have been many demonstrations of this "linguistic alignment" for different (para-)linguistic features (e.g., phonology, word choice, gesture, speech rate), different speakers of a language can vary considerably in such features (e.g., I might speak quickly and you speak slowly, even when saying the same content). Thus, truly comprehensive alignment will require some degree of partner-specific alignment. Does partner-specific alignment arise because speakers can keep track of relevant linguistic features independently for different conversational partners? Or is alignment driven by across-the-board (i.e., partner-nonspecific) representations of the distributions of linguistic features? I'll discuss the results of five experiments, which show that when the overall distribution of syntactic constructions is balanced across an experimental session, people do not show partner-specific alignment, even when individual partners produce distinct and systematic syntactic distributions. However, when the overall distribution of syntactic constructions is biased within an experimental session -- across all partners -- speakers do align to that bias. Thus, speakers align to their recent syntactic experience, but only according to overall, rather than partner-specific, statistics. In the syntactic domain (and perhaps in all non-referential domains), then, any partner-specific alignment that speakers exhibit seems to be driven by overall experience, rather than by speakers tracking and then aligning to their partners’ statistically biased behaviors in a partner-specific way.

2016-03-08

Mothers’ speech and object naming contingent on infants’ gaze and hand actions

Lucas Chang Cognitive Science, UCSD

Language input contributes to infants’ learning and predicts their later language outcomes, yet occurs in a dynamic social context. A growing body of research indicates that caregivers’ responsiveness to infants facilitates language acquisition. I will present a longitudinal study of mother-infant interactions that sheds light on how contingent responsiveness makes language accessible to infants. In addition to eliciting a greater volume of maternal speech, infants’ exploratory gaze and hand actions also change the nature of the input to associative learning systems: associations arise not only between self-generated actions and caregiver responses, and between caregiver speech and external referents, but jointly among all these modalities.

2016-03-01

How The Eyes “Read” Sign Language: An Eyetracking Investigation of Children and Adults during Sign Language Processing

Rain Bosworth Psychology, UCSD

Whether listening to spoken sentences, watching signed sentences, or even reading written sentences, the behaviors that lead to successful language comprehension can be characterized as a developed perceptual skill. Over four prolific decades, Keith Rayner pioneered eyetracking research showing how eye-gaze behavior during reading text and scene perception is affected by perceptual, linguistic, and experiential factors. In comparison, much remains unknown about how signers “read” or “watch” sign language. In this talk, we report progress on recent experiments that were designed to discover correlations amongst measures of gaze behavior, story comprehension, and Age of ASL Acquisition (AoA) in children and adults. Using the 120X Tobii eyetracker, we found that, compared to late and novice signers, early native signers exhibited more focused fixations on the face region and smaller scatter in their gaze space. Remarkably, these mature skilled gaze patterns were already found in our youngest native signers by 3 to 5 years of age. Among adults, smaller vertical gaze space was highly correlated with earlier AoA, better comprehension, and higher lexical recall. This led us to ask whether these focused gaze patterns are merely indicators of high perceptual skills or whether they could also cause better perceptual processing. To test this, we examined a group of novice ASL students who were explicitly instructed to fixate on the face and not move their eyes while watching stories, mimicking the skilled gaze behavior seen in early signers. Eyetracking data showed that their gaze patterns changed according to the instructions, and moreover, that this change resulted in better comprehension accuracy. Current data suggests that age-related changes in passive eye gaze behavior can provide a highly sensitive index of normal sign language processing. We hope to use these findings towards promoting perceptual behaviors that support optimal language processing in deaf signing children.

2016-02-23

Language Research: Theory and Practice

Stephanie Jed Literature, UCSD

For Galileo, Kepler, Bacon and others, linguistic competence in Latin and Greek was a foundation for scientific research. Without the ability to read and write in a “foreign” language, it was thought in the 16th and 17th centuries, modern scientists would not be able to articulate the epistemological and methodological grounds of their research and discoveries (Westman). Language-learning – and continued exercise in reading and writing – was, therefore, an integral part of scientific training and an integral dimension of scientific creativity and method. Today, however, the learning of a language is generally divided from research on language and the brain. Courses (in linguistics, cognitive science, neuroscience, psychology etc.) that examine linguistic structures, language acquisition, language development, language processing, language perception, language and memory, language and learning, language and the sensory motor system, etc. generally do not offer any practice of advanced language learning. In this presentation, I will ask what may be lost in this disciplinary division. Outlining the proposal of an upper division course that would integrate language-learning with research in embodiment, the sensory motor system, the mirror neuron hypothesis, and other topics, I invite brainstorming and collaboration from the CRL community in the design of a new integrated course in language research - theory and practice.

2016-02-09

Connectionist morphology revisited

Farrell Ackerman and Rob Malouf

In naturally occurring text, the frequencies of inflected wordforms follow a Zipfian distribution, with a small set of inflected forms occurring frequently and a long tail of forms that are rarely (or never) encountered. For languages with complex inflectional systems (e.g., in the Sino-Tibetan language Khaling, each verb can have up to 331 different forms based on up to ten distinct stems, and there are numerous verb classes), most inflected forms of most words will never be observed. Consequently, speakers will necessarily be faced with what Ackerman et al. (2009) pose as the Paradigm Cell Filling Problem: how do speakers reliably predict unknown inflected forms on the basis of a set of known forms? Recent theoretical approaches to this problem (e.g., Ackerman & Malouf 2013, Bonami & Beniamine 2015, Blevins 2016, Sims 2016, among others) have emphasized the role of implicational relations and analogy, but despite intriguing results concerning information-theoretical principles of paradigm organization, various aspects of learning have proven difficult to formalize. In this talk, we discuss the role that connectionist models of inflection can play in solving the PCFP. Connectionist models of morphological learning inspired a vigorous debate in the 1980s and early 1990s over quite simple morphological phenomena: many theoretical linguists were convinced by Pinker & Prince (1988) and others that connectionist models could not treat morphology as successfully as symbolic analyses in linguistic theory. However, over the past 10 or so years morphological theory has developed beyond familiar morpheme-based perspectives, with new word-based models and modern "deep learning" connectionist models capable of identifying new patterns in data and principles concerning complex morphological organization. We will explore some new directions in morphological analysis, with particular attention to some preliminary results in the connectionist learning of complex morphological paradigms.
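A quick simulation gives a feel for why Zipfian frequencies make the Paradigm Cell Filling Problem unavoidable. The cell count echoes the Khaling example above, but the Zipf exponent, token count, and sampling scheme are illustrative assumptions rather than anything from the talk.

```python
import random

random.seed(0)

def zipf_weights(n, s=1.2):
    """Unnormalized Zipf-like weights for n paradigm cells (frequency ~ 1/rank^s)."""
    return [1.0 / (rank ** s) for rank in range(1, n + 1)]

N_CELLS = 331      # e.g., the number of forms a single Khaling verb can have
N_TOKENS = 5_000   # toy corpus: tokens of this one verb
weights = zipf_weights(N_CELLS)

# Sample which paradigm cell each token realizes, then count distinct cells seen.
sample = random.choices(range(N_CELLS), weights=weights, k=N_TOKENS)
observed = len(set(sample))
print(f"cells observed: {observed}/{N_CELLS} "
      f"({N_CELLS - observed} cells never attested)")
```

Even with thousands of tokens of a single verb, a large share of its paradigm cells typically remains unattested, so speakers must infer those forms from the forms they have encountered.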

2016-02-02

Text and discourse validation

Murray Singer

Beyond the processing of language at the word, sentence, and message levels, there is accumulating evidence that readers engage in the continual validation of message consistency and congruence. I will outline the theoretical framework in which I have investigated this phenomenon. Empirical evidence will be presented pertaining to (a) the basic phenomenon, (b) validation of presupposed versus focused text ideas, and (c) individual differences in validation processing. Validation principles emerging from work in numerous labs will be identified. Strategies for reconciling validation successes and failures will be considered.

2016-01-26

Investigating Children’s Testimonial Learning: Sources of Protection and Vulnerability

Melissa Koenig Institute of Child Development, University of Minnesota

Much of what we know we learn from what others tell us. My research program examines testimonial learning by focusing on children’s reasoning about sources. In this research, we focus on two kinds of estimates children make about speakers: estimates of their knowledge and their responsibility. Using these two types of estimates, I will discuss sources of protection and vulnerability that characterize children’s learning decisions. First, I will suggest that as soon as children can monitor the truth of a message, they show an interest in assessing the grounds or reasons that speakers have for their claims. Second, I’ll argue that while children are ready to flexibly adjust their epistemic inferences in line with a speaker’s behavior, children’s interpersonal assumptions of responsibility may be more culturally variable, and harder to undermine. Findings will be discussed in relation to categories of protection that are shared with adults, as well as implications for the role that interpersonal trust may play in testimonial learning.

2016-01-12

Resolving Quantity- and Informativeness-implicature in indefinite reference

Till Poppels Linguistics, UCSD

A central challenge for all theories of conversational implicature (Grice, 1957, 1975) is characterizing the fundamental tension between Quantity (Q) implicature, in which utterance meaning is refined through exclusion of the meanings of alternative utterances, and Informativeness (I) implicature, in which utterance meaning is refined by strengthening to the prototypical case (Atlas & Levinson, 1981; Levinson, 2000). Here we report a large-scale experimental investigation of Q-I resolution in cases of semantically underspecified indefinite reference. We found strong support for five predictions, strengthening the case for recent rational speaker models of conversational implicature (Frank & Goodman, 2012; Degen, Franke, & Jäger, 2013): interpretational preferences were affected by (i) subjective prior probabilities (Informativeness), (ii) the polarity and (iii) the magnitude of utterance cost differentials (Quantity), (iv) the felicity conditions of indefinite NPs in English, and (v) the ‘relatability’ of X and Y.
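For readers unfamiliar with rational speaker models, the following minimal sketch implements the basic Rational Speech Act recursion of Frank and Goodman (2012) on the textbook some/all example rather than on the indefinite-reference stimuli of this study; the lexicon, priors, costs, and rationality parameter are toy values.

```python
import numpy as np

# Toy lexicon: rows = utterances, cols = world states; 1 = literally true.
utterances = ["some", "all"]
states = ["a few", "every"]
L = np.array([[1, 1],   # "some" is literally true of both states
              [0, 1]])  # "all" is true only of the 'every' state
prior = np.array([0.5, 0.5])  # listener's prior over states (Informativeness side)
cost = np.array([0.0, 0.0])   # per-utterance costs (Quantity side); toy values
alpha = 1.0                   # speaker rationality

# Literal listener: P(state | utterance) proportional to truth * prior.
literal = L * prior
literal = literal / literal.sum(axis=1, keepdims=True)

# Pragmatic speaker: P(utterance | state) proportional to exp(alpha*(log literal - cost)).
with np.errstate(divide="ignore"):
    utility = alpha * (np.log(literal) - cost[:, None])
speaker = np.exp(utility)
speaker = speaker / speaker.sum(axis=0, keepdims=True)

# Pragmatic listener: P(state | utterance) proportional to P(utterance | state) * prior.
pragmatic = speaker * prior
pragmatic = pragmatic / pragmatic.sum(axis=1, keepdims=True)

for i, u in enumerate(utterances):
    print(u, dict(zip(states, pragmatic[i].round(2))))
```

In this toy setting the pragmatic listener who hears "some" shifts probability toward the 'a few' state, illustrating how prior probabilities and utterance costs jointly shape interpretation in such models.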

2015-12-01

Emergence of space-time mappings in communication: Initial biases and cultural evolution

Esther Walker and Tessa Verhoef Cognitive Science, UCSD

Humans spatialize time. This occurs in artifacts like timelines, in spontaneous gestures, and in conventional language ("think BACK to the summer"). These links between space and time, moreover, exist both as associations in individual minds and as shared, cultural systems that transcend individuals. Understanding the origins of this "tangle of space and time" will require analyses at multiple levels, from initial individual biases, to local cultural norms, to cultural evolution (Núñez and Cooperrider, 2013). Where do these space-time links come from, and how are individual biases related to cultural norms?
Here we present a series of laboratory experiments using methods from the field of Language Evolution to simulate the cultural emergence of space-time mappings. In a first communication game experiment, dyads had to communicate about temporal concepts using only a novel, spatial signaling device. Over the course of their interaction, participants rapidly established semiotic systems that mapped systematically between time and space, reflecting both improvisation and social coordination. These semiotic systems exhibited a number of similarities -- but also striking idiosyncrasies. Ongoing research is investigating how these initial systems will change as they are propagated repeatedly. We predict that cultural transmission across multiple "generations" will produce increasingly regular and stable semiotic systems, systems that entrench and reproduce both shared biases and idiosyncratic "historical accidents." By foregrounding the interaction of mechanisms that operate on disparate timescales, laboratory experiments can shed light on the commonalities and variety found in space-time mappings in languages around the world.

2015-11-24

Lateralization of the N170 for word and face processing in deaf signers

Zed Sevcikova Sehyr School of Speech, Language, and Hearing Sciences, SDSU

Left-lateralization for words develops before right-lateralization for faces, and hemispheric specialization for faces may be contingent upon prior lateralization for words (Dundas, Plaut & Behrmann, 2014). We examined the relationship between word and face processing for deaf native users of American Sign Language who have distinct developmental experiences with both words and faces (e.g., the face conveys linguistic information). We investigated whether hemispheric organization of word and face recognition (indexed by lateralization of the N170) is uniquely shaped by sign language experience. Hearing non-signers and deaf signers made same-different judgments to pairs of words or faces (192 trials each), where the first stimulus was presented centrally and the second was presented to either the left (LH) or right hemisphere (RH). EEG was recorded to the centrally presented stimulus and referenced to the average of all 32 electrode sites. We observed a similar pattern of N170 laterality for deaf and hearing participants, but with a different scalp distribution. For both groups, the N170 to words was larger at LH occipital sites, but only hearing participants also showed a larger N170 at LH temporal sites. For faces, deaf signers showed a larger N170 response at RH temporal sites, with a weaker amplitude difference at occipital sites. Hearing participants showed a similar RH lateralized response over both temporal and occipital sites. Thus, lateralization for words and faces appears similar for deaf and hearing individuals, but differences in scalp distribution may reflect unique organization of visual pathways in the occipitotemporal cortex for deaf signers.

2015-11-17

Pushing the boundary of parafoveal processing in reading

Mallorie Leinenger & Liz Schotter Psychology, UCSD

When we read, we look directly at (i.e., foveate) a word while at the same time obtaining a preview of the word(s) to come, in parafoveal vision. The current theory of reading is that parafoveal processing is used to facilitate subsequent foveal processing. That is, fixation durations on the subsequent foveal target word are shorter when the reader had an accurate (i.e., identical) parafoveal preview of that word than when the preview stimulus had been replaced with something else (i.e., in a gaze-contingent display change paradigm; Rayner, 1975). The presumed mechanism for this facilitated processing is integration of parafoveal preview and foveal target information across saccades, which is easier when the two words are similar. However, we suggest that there are cases in which processing of the parafoveal preview can directly influence fixation behavior on the foveal target, even in the absence of similarity between preview and target. Thus, we hypothesize that, if easy to process, the preview stimulus can be used to pre-initiate future eye movement programs, leading to fairly short fixations on any target stimulus. In this talk, we describe two experiments that find evidence for this alternative hypothesis and we explain how these effects may be accommodated by an existing model of oculomotor control in reading.

2015-11-10

Conceptual Integration and Multimodal Discourse Comprehension

Seana Coulson Cognitive Science, UCSD

In face to face conversation, understanding one another involves integrating information activated by our interlocutors' speech with that activated by their gestures. I will discuss a series of studies from my lab that have explored the cognitive processes underlying speech-gesture integration. These studies indicate the importance of visuo-spatial working memory resources for understanding co-speech iconic gestures.

2015-11-03

Word learning amidst phonemic variability

Conor Frye Cognitive Science, UCSD

It is widely assumed that a language learner’s initial goal is to detect the specific sound categories of the language, and that these sound categories and their perceptual boundaries are fairly fixed in adulthood. Accounts that make this assumption imply that learners should no longer be able to learn phonemically variable words as the same word—for example, that div and tiv are equivalent labels for a novel concept. We provide evidence that categories are much more plastic and can be modified and merged, even in adulthood, and that exposure to different probability distributions alters functional phoneme boundaries nearly immediately. Such malleability challenges the psychological relevance of phonemes for learning and recognizing words, and argues against the primacy of the phoneme in word representations in favor of a more probabilistic definition of word and speech sound identity.

2015-10-27

Iconicity, naturalness and systematicity in the emergence of sign language structure

Tessa Verhoef, Carol Padden, and Simon Kirby Center for Research in Language, UCSD

Systematic preferences have been found for the use of different iconic strategies for naming man-made hand-held tools (Padden et al., 2014) in both sign and gesture: HANDLING (showing how you hold it) and INSTRUMENT (showing what it looks like) forms are most frequently used. Within those two, sign languages vary in their use of one strategy over the other (Padden et al., 2013). Such lexical preferences across different sign languages provide an ideal test case for understanding the emergence of conventions in language in which multiple types of bias are at play. Specifically, we argue that there may be distinct biases operating during production and interpretation of single signs on the one hand, and learning a conventional system of signs on the other. It is crucial we understand how these distinct biases interact if we are to explain the emergence of systematicity in a linguistic system with iconic underpinnings. We present three experiments that together help to form a picture of the interplay between naturalness, iconicity and systematicity in the origin of linguistic signals. The first experiment maps out people's initial natural biases towards the two strategies for naming tools, the second investigates the effects of these biases on the learnability of artificial languages, and the third tests the flexibility of participants’ biases when they are exposed to specific types of data. Our results show that non-signers quickly detect patterns for which they need to categorize abstract iconic gesture strategies, while there is a subtle interplay between learning biases and natural mapping biases. Natural mapping biases seem to strongly influence one-off judgments on individual items, while a bias for systematicity takes effect once there is exposure to sets of structured data.

2015-10-20

Measuring Conventionalization in the Manual Modality

Savithry Namboodiripad, Dan Lenzen, Ryan Lepic, and Tessa Verhoef Linguistics, UCSD

Gestures produced by users of spoken languages differ from signs produced by users of sign languages in that gestures are more typically ad hoc and idiosyncratic, while signs are more typically conventionalized and shared within a language community. To study how gestures may change over time as a result of the process of conventionalization, we designed a social coordination game to elicit repeated silent gestures from hearing nonsigners, and used Microsoft Kinect to unobtrusively track the movement of their bodies as they gestured (following Lenzen, 2015). Our approach follows both a tradition of lab experiments designed to study social coordination and transmission in the emergence of linguistic structure (Schouwstra et al., 2014) and insights from sign language research on language emergence. Working with silent gesture, we were able to simulate and quantify effects of conventionalization that have been described for sign languages (Frishberg, 1975), including changes in efficiency of communication and size of articulatory space, in the laboratory. With Kinect we were able to measure changes in gesture that are also the hallmarks of conventionalization in sign language. This approach opens the door for more direct future comparisons between ad hoc gestures produced in the lab and natural sign languages in the world.

2015-10-13

Pronominal ambiguity resolution in Japanese benefactive constructions

Kentaro Nakatani Linguistics, UCSD/Konan University

Japanese benefactive constructions ("do something for somebody") usually involve auxiliary uses of verbs of giving. Because Japanese has two contrastive giving verbs, kureru 'give (to the speaker)' and ageru 'give (to a non-speaker)' (which are, roughly, transitive counterparts of coming and going), two corresponding types of benefactive constructions can be formed, depending on who the beneficiary is. This feature usually benefits the processing of Japanese, a massively pro-drop language, because null arguments can be recovered from the choice of these benefactive verbs. Things can get complicated, however, when an adjunct clause combined with the use of null pronouns leads the comprehender to a specific resolution of the referential ambiguity of these null pronouns, and it eventually turns out that this resolution contradicts the interpretive requirements of the benefactive verb (i.e., who the beneficiary should be).
While previous studies have pointed out the processing load (supposedly) triggered by structural reanalysis in such benefactive constructions, what has been overlooked is the effect of pragmatic inferences made between the embedded adjunct clause and the main clause. In this study, I will show that these inter-eventive pragmatic inferences affect the ease of comprehension in opposite directions depending on the choice of benefactive verb, reporting results from two self-paced reading experiments and a forced-choice query.

2015-10-06

Repetition and information flow in music and language

Davy Temperley Music Theory, Eastman School of Music

In the first part of this talk I will report on some recent research on the use of repetition in language and music. A corpus analysis of classical melodies shows that, when a melodic pattern is repeated with an alteration, the alteration tends to lower the probability of the pattern - for example, by introducing larger intervals or chromatic notes (notes outside the scale). A corpus analysis of written English text shows a similar pattern: in coordinate noun-phrase constructions in which the first and second phrases match syntactically (e.g. "the black dog and the white cat"), the second phrase tends to have lower lexical (trigram) probabilities than the first. A further pattern is also observed in coordinate constructions in language: the tendency towards "parallelism" (syntactic matching between the first and second coordinate phrases) is much stronger for rare constructions than for common ones (the "inverse frequency effect"). (There is some evidence for this phenomenon in music as well.) I will suggest that these phenomena can be explained by Levy and Jaeger's theory of Uniform Information Density (UID): repetition is used to smooth out the "spikes" in information created by rare events.
In the second part of the talk I will focus further on the inverse frequency effect, and suggest another factor that may be behind it besides UID. I will argue that it may facilitate sentence processing, by constraining the use of rare syntactic constructions to certain situations - essentially, situations in which they are repeated. This helps to contain the combinatorial explosion of possible analyses that must be considered in sentence processing. I will relate this to another type of rare syntactic construction, "main clause phenomena" - constructions that occur only (or predominantly) at the beginning of a main clause, such as participle preposing and NP topicalization. This, too, can be explained in processing terms: since processing the beginning of a sentence requires little combinatorial search, it is natural that a greater variety of constructions would be allowed there.
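The coordinate-NP measure described in the first paragraph can be illustrated with a toy trigram model; the counts below are invented, whereas the reported analysis of course relies on corpus-estimated probabilities.

```python
from math import log2

# Toy trigram model: P(w3 | w1, w2) estimated from hypothetical counts.
trigram_counts = {
    ("the", "black"): {"dog": 30, "cat": 10},
    ("the", "white"): {"cat": 5, "dog": 2, "whale": 1},
}

def trigram_logprob(w1, w2, w3):
    counts = trigram_counts[(w1, w2)]
    return log2(counts[w3] / sum(counts.values()))

# Information content (surprisal) of the head noun in each conjunct of
# "the black dog and the white cat".
first = -trigram_logprob("the", "black", "dog")
second = -trigram_logprob("the", "white", "cat")
print(f"first conjunct head:  {first:.2f} bits")
print(f"second conjunct head: {second:.2f} bits")
# The corpus finding described above is that the second conjunct tends to
# carry the lower-probability (higher-surprisal) material; on the UID account,
# the repeated syntactic frame helps smooth the resulting spike in information.
```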

2015-06-02

New space-time metaphors foster new mental representations for time

Rose Hendricks Cognitive Science, UCSD

Do linguistic metaphors give rise to non-linguistic representations? If so, then learning a new way of talking about time should foster new ways of thinking about it. We describe a set of studies in which we trained English-speaking participants to talk about time using vertical spatial metaphors that are novel to English. One group learned a mapping that placed earlier events above and the other a mapping that placed earlier events below. After mastering the new metaphors, participants were tested in a non-linguistic implicit space-time association task – the Orly task. This task has been used previously to document cross-linguistic differences in representations of time (Boroditsky et al., 2010; Fuhrman et al., 2011). Some participants completed temporal judgments in the Orly task without any secondary task, while others did so under either verbal or visual interference. Overall, the system of metaphors that participants were trained on influenced their performance on the Orly task, and this influence did not differ among the three interference conditions, although the effect did not reach significance for participants in the verbal interference condition. This suggests that as a result of learning a new metaphor, people developed new implicit metaphor-consistent ways of thinking about time. Finally, a serendipitous sample of Chinese-English bilinguals, who are already familiar with vertical metaphors for time, provided us with the opportunity to investigate what happens when natural language metaphors and newly acquired ones conflict. These participants demonstrated a combination effect, in which both long-term and immediate experience shaped their thinking. I'll share the work that has been done on this project and the directions we hope to pursue going forward.

2015-05-26

Feshing fur phonims: Learning words amidst phonemic variability

Conor Frye Cognitive Science, UCSD

+ more

It is widely assumed that a language learner’s initial goal is to detect the specific sound categories of that language, and that these sound categories and their perceptual boundaries are fairly fixed in adulthood. The studies and theoretical accounts behind this assumption imply that learners should no longer be able to learn phonemically-differing words as the same word—for example, that paff and baff are equivalent labels for a novel concept. We provide evidence that categories are much more plastic and can be modified and merged, even in adulthood, and that exposure to different probability distributions alters functional phoneme boundaries. Such malleability challenges the psychological relevance of phonemes for learning and recognizing words, and argues against the primacy of the phoneme in word representations in favor of a more probabilistic definition of word and speech sound identity.

2015-05-19

Studying plasticity for speech perception in the brain: False starts and new trails

Jason Zevin Psychology and Linguistics, USC

+ more

People typically and spectacularly fail to master the speech sound categories of a second language (L2) in adulthood. In searching for the neural basis of this phenomenon, we have begun to suspect that the neural indices of difficulties in adult L2 speech perception reflect the behavioral relevance of the stimuli, rather than any basic perceptual function relevant to stimulus categorization. I will present evidence for this interpretation, followed by some proposals for what to do about it. One strategy is to focus on how people succeed in understanding L2 speech rather than their failure to categorize speech sounds in ostensibly neutral experimental contexts. We can look, for example, at correlations in brain activity while people listen to discourses of varying lengths. Or we can look at the dynamics of word recognition in simulated communicative contexts. I will be presenting some data from our first steps in these directions.

2015-05-12

What You See Isn't Always What You Get

Rachel Ostrand Cognitive Science, UCSD

+ more

Human speech perception often includes both auditory (the speaker's voice) and visual (the speaker's mouth movements) components. Although these two sensory signals necessarily enter the brain separately through different perceptual channels, they end up being integrated into a single perception of speech. An extreme example of this integration is the McGurk Effect, in which the auditory and visual signals conflict and the listener perceives a fusion of the two differing components. My research addresses when this auditory-visual integration occurs: before or after lexical access. Namely, does the visual information that is integrated into the (more reliable) auditory signal have any influence over which word gets activated in the lexicon, or does it merely contribute to a clearer perceptual experience? Which signal is used to access the lexicon to identify the word a listener just perceived - the integrated auditory-visual percept, or the raw auditory signal? If it's the former, then the visual information of a speaker's mouth movements fundamentally influences how you perceive speech. If it's the latter, then when you fall prey to the McGurk Effect (or are in a noisy bar), although you perceive one word, you lexically access another. Or maybe it's both?!

2015-04-28

Two methodological principles of phonological analysis

Eric Bakovic Linguistics, UCSD

+ more

The phonological forms of morphemes often alternate systematically, depending on context. The methodological starting point of (generative) phonological analysis is to posit unique underlying mental representations for alternating morphemes, consisting of the same basic units of analysis as their systematically alternating surface representations, and to derive those systematically alternating surface representations using context-sensitive transformations.
Two further methodological principles, the Distribution Principle and the Reckonability Principle, come into play in deciding what the correct underlying representation of a morpheme is. In this talk I define these two principles and describe how they are used in phonological analysis. I focus in particular on a fundamental difference between the two principles: the Distribution Principle follows as a necessary consequence of the methodological starting point identified above, whereas the Reckonability Principle satisfies criteria of formal simplicity and makes an independent contribution only when the Distribution Principle is not applicable.
It is rarely if ever the case that these two methodological principles come into conflict in an analysis of actual phonological data, but the difference between them entails that the Distribution Principle will trump the Reckonability Principle if they ever were to conflict. I present analyses of a prototypical case in two theoretical models, one (Harmonic Grammar) predicting that the conflict is instantiable and the other (Optimality Theory) predicting that it is not, and discuss the potential significance of the apparent fact that the conflict is not (robustly) instantiated in actual phonologies.

2015-04-21

Give me a quick hug! Event representations are modulated by choice of grammatical construction

Eva Wittenberg Center for Research in Language, UCSD

+ more

When you talk about a particular event, grammar gives you lots of options. You can use different verb forms, active or passive, topicalizations, or other grammatical devices to highlight, modulate, include or exclude very subtle aspects of the event description. I will be presenting a special case of grammatical choice: light verb constructions, like „Charles gave Julius a hug“, their base verb construction counterparts, like „Charles hugged Julius“, and non-light, syntactically similar constructions, like „Charles gave Julius a book“. With data from several experiments, I will show that light verb constructions are not only processed differently from other constructions, but that they also evoke very particular event representations, modulating not only the processing of thematic roles, but also imagined event durations.

2015-04-14

Poverty, dialect, and the “Achievement Gap”

Mark Seidenberg University of Wisconsin, Madison

+ more

Research in cognitive and developmental psychology and in cognitive neuroscience has made enormous progress toward understanding skilled reading, the acquisition of reading skill, the brain bases of reading, and the causes and treatment of reading impairments. The focus of my talk (and a forthcoming book) is this question: if the science is so advanced, why do so many people read so poorly? Everyone knows that when it comes to reading, the US is a chronic underachiever. Literacy levels in the US are low compared to other countries with fewer economic resources. About 30% of the US population has only basic reading skills, and the percentages are higher among lower income and minority groups. I’ll examine arguments by Diane Ravitch and others that attribute poor reading achievement in the US to poverty, and present recent behavioral and modeling evidence concerning the role of language variation—dialect—in the black-white achievement gap in reading. I will suggest that there are opportunities to increase literacy levels by making better use of what we have learned about reading and language but also institutional obstacles and understudied issues for which more evidence is badly needed.

2015-04-07

Does verbal description enhance memory for the taste of wine?

Rachel Bristol & Seana Coulson Cognitive Science, UCSD

+ more

We will ask participants to sample wine and either describe their perceptual experience or perform a control task. Memory for these experiences will be informally tested to examine the impact (if any) of verbal description. Besides wine, a variety of tasty snacks and non-alcoholic beverages will be available for consumption. Attendees are encouraged to engage in social interaction so as to promote a naturalistic environment for participants. This event will begin at 3:30 and last until 5pm and attendees are welcome to arrive late or leave early.

2015-03-31

Fail fast or succeed slowly: Good-enough processing can mask interference effects

Bruno Nicenboim Potsdam University

+ more

In memory research, similarity-based interference refers to the impaired ability to remember an item when it is similar to other items stored in memory (Anderson & Neely, 1996). Interference has been shown to also be relevant to language comprehension processes. On a cue-based retrieval account (Van Dyke & Lewis, 2003; Lewis & Vasishth, 2005), grammatical heads such as verbs provide retrieval cues that are used to distinguish between the target item and competitors in memory. Similarity-based interference occurs when items share cues (such as number, syntactic category, etc.), which makes it harder to distinguish between them, causing both longer reading times (RTs) and lower question-response accuracy. Since lower accuracy could result from either incorrectly retrieving a competitor or simply failing to complete a retrieval (an unstarted or aborted process), it is unclear how RTs are related to question-response accuracy. We conducted a self-paced reading experiment that investigated interference effects in subject-verb dependencies in German. We found the expected retrieval interference effect: longer RTs as well as lower accuracy in high interference conditions vs. low interference ones. In addition, we fitted hierarchical multinomial processing trees (MPTs; Riefer & Batchelder, 1988; Matzke et al., 2013) using the Stan modeling language to estimate the latent parameters underlying comprehension accuracy: the probability of any retrieval, the probability of a correct retrieval, and the bias to guess Yes (rather than No). We show that the estimates of the underlying parameters can uncover a complex relationship between accuracy and RTs: high interference causes longer RTs at successful retrievals, but it also causes a higher proportion of incomplete retrievals, which in turn lead to lower accuracy and shorter RTs.
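
To make the latent structure concrete, the following is a minimal sketch of a multinomial processing tree with the three parameters described above; the exact tree layout, parameter names, and numbers are my own simplifying assumptions, not the hierarchical model fitted in Stan.

```python
# Minimal sketch of a multinomial processing tree with three latent parameters:
#   a: probability that a retrieval is completed at all
#   r: probability that a completed retrieval is correct
#   g: bias to guess "yes" when no retrieval is completed
def p_response_yes(a, r, g, correct_answer_is_yes):
    """Probability of responding "yes" under the tree."""
    if correct_answer_is_yes:
        # completed & correct -> yes; completed & incorrect -> no; incomplete -> guess
        return a * r + (1 - a) * g
    else:
        # completed & correct -> no; completed & incorrect -> yes; incomplete -> guess
        return a * (1 - r) + (1 - a) * g

# Example: a high-interference condition might lower `a` (more aborted retrievals),
# which both lowers accuracy and, because guesses are fast, shortens mean RTs.
print(p_response_yes(a=0.7, r=0.8, g=0.5, correct_answer_is_yes=True))  # 0.71
print(p_response_yes(a=0.9, r=0.8, g=0.5, correct_answer_is_yes=True))  # 0.77
```
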

2015-03-17

The grammar of emotions: Word order, particles, and emphasis

Andreas Trotzke Linguistics, University of Konstanz

+ more

In this talk, I provide evidence for a pragmatic notion of emphasis that is closely related to mirativity, a kind of evidentiality marking by which an utterance is marked as conveying information that is unexpected or surprising to the speaker. Certain options of German word order, sometimes in combination with discourse particles, yield an emphatic character that is typical for the expressive side of utterances and endows them with an exclamative flavor. Cross-linguistic evidence offers good reasons to assume that (at least certain forms of) emphatic marking must be distinguished from information structure. I introduce a new phenomenon in this context, namely cases of co-constituency of discourse particles and wh-elements in the left periphery of the clause. I argue that this construction shows several features of emphasis, and I substantiate my claim by a phonetic experiment that investigates whether the construction shows some of the core characteristics of emotive speech.

2015-03-10

Cross-cultural diversity in narrative structure: Towards a linguistic typological approach to visual narrative

Neil Cohn Department of Cognitive Science, UCSD

+ more

While extensive research has studied the structure of language and verbal discourse, only recently has cognitive science turned towards investigating the structure of visual narratives like those found in comics. This work on the “narrative grammar” of sequential images has identified several structural patterns in visual narratives. To examine the extent of these patterns in actual narrative systems, we examined a corpus of roughly 160 comics from across the world (American comics, Japanese manga, Korean manhwa, OEL manga, French bande dessinée, and German comics) constituting approximately 18,000 panels. Our analysis will show that visual narratives differ between cultures in systematic ways across several dimensions, including linear semantic coherence relations between images, the attentional framing of scenes, and the narrative constructions used in sequential images. However, these patterns are not restricted to geographic boundaries, but rather to the narrative systems used across authors of a common “style.” That is, these findings suggest that the “visual languages” used in comics across the world employ different systematic narrative grammars, and that shared typological principles may underlie the structure of narrative systems cross-culturally.

2015-03-03

Pragmatic strategies for efficient communication

Leon Bergen Brain and Cognitive Sciences, MIT

+ more

Pragmatic reasoning allows people to adapt their language to better fit their communicative goals. Consider scalar implicatures, e.g. the inference that "Some of the students passed the test" means that not all of them passed. Without this pragmatic strengthening, the only way that a speaker could communicate this meaning is by using the longer and clumsier phrase, "Some but not all." The speaker in this example can be confident that the listener will draw the correct inference, because they share a simple maxim of conversation: be informative. If the speaker had known that all of the students had passed, then saying "All" would have been more informative than saying "Some"; the listener can therefore conclude that not all of the students passed. This type of Gricean reasoning has recently been formalized in models of recursive social reasoning (Franke, 2009; Frank and Goodman, 2012; Jager, 2012), and used to predict quantitative judgments in pragmatic reasoning tasks.
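
As a toy illustration of the recursive reasoning just described, here is a minimal sketch (my own simplification, assuming a two-world, two-utterance lexicon and a uniform prior) of how a pragmatic listener strengthens "some" to "not all".

```python
# Minimal sketch of recursive pragmatic reasoning for the scalar implicature example.
worlds = ["some_not_all", "all"]
utterances = ["some", "all"]
literal = {                                       # truth conditions of each utterance
    "some": {"some_not_all": 1.0, "all": 1.0},    # "some" is true in both worlds
    "all":  {"some_not_all": 0.0, "all": 1.0},
}

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

def literal_listener(u):
    # interprets u literally, with a uniform prior over worlds
    return normalize({w: literal[u][w] for w in worlds})

def speaker(w):
    # prefers the utterance under which a literal listener best recovers w
    return normalize({u: literal_listener(u)[w] for u in utterances})

def pragmatic_listener(u):
    # reasons about which world would have led the speaker to say u
    return normalize({w: speaker(w)[u] for w in worlds})

print(pragmatic_listener("some"))  # ~0.75 on "some_not_all": the implicature emerges
```
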

I will discuss recent work on pragmatic inferences which require more than just the assumption of speaker informativeness. This includes a diverse set of phenomena, several of which have not previously been thought to be pragmatic in nature: exaggeration and metaphor, focus effects from prosodic stress, quantifier scope inversion, and embedded implicatures. Drawing on experimental evidence and computational modeling, I will argue that each of these phenomena corresponds to a natural way of augmenting pragmatic reasoning with additional knowledge about the world or the structure of social intentions. These phenomena illustrate both the sophistication of people's pragmatic reasoning, and how people leverage this reasoning to improve the efficiency of their language use.

2015-02-24

How cultural evolution gives us linguistic structure

Simon Kirby Language Evolution and Computation, University of Edinburgh

+ more

Evolutionary linguists attempt to explain the origins of the fundamental design features of human language, such as duality of patterning, compositionality or recursion. I will argue that these system-wide properties of language are the result of cultural evolution. We can recreate this process of cultural evolution in the laboratory and observe closely how structure emerges from randomness as miniature languages are passed down through chains of participants by iterated learning.

I will present two such experiments, one in the gestural modality showing the emergence of conventionalised sign from iconic pantomime, and one using an artificial language learning and interaction task. These experiments show that, contrary to initial expectations, the emergence of structure is not inevitable, but relies on a trade-off between pressures from learning and pressures from communication. I will end the talk by arguing that these results provide a unifying explanation for why complexity in languages appears to correlate inversely with number of speakers, and why Al-Sayyid Bedouin Sign Language appears to lack duality of patterning.

2015-02-17

Form, meaning, structure, iconicity

Bart de Boer Artificial Intelligence Lab, Vrije Universiteit Brussel

+ more

This talk explores the relation between structure and iconicity with a combination of computer models and experiments. Iconic structure is a systematic mapping between form and meaning. This may influence how easily signals are learned and understood, and it has been hypothesized that it may have played an important role in early language evolution. However, modern languages make relatively little use of it, and it is a mystery how (evolutionarily) early language made the transition from iconic to conventionalized, structured systems of signals. I will first present a brief introduction to what it means for signals to be iconic and what problems iconic signals pose for a theory of language evolution. I will then present a model of how a transition from iconic to structured signals could take place, as well as preliminary experimental results on whether the model fits human behavior.

2015-02-10

The evolutionary origins of human communication and language

Thomas Scott-Phillips Evolutionary and Cognitive Anthropology, Durham University, UK

+ more

Linguistic communication is arguably humanity's most distinctive characteristic. Why are we the only species that communicates in this way? In this talk, based upon my recent book (Speaking Our Minds, Palgrave Macmillan), I will argue that the difference between human communication and the communication systems of all other species is likely not one of degree, but rather one of kind. Linguistic communication is made possible by mechanisms of metapsychology, and expressively powerful by mechanisms of association. In contrast, non-human primate communication is most likely the opposite: made possible by mechanisms of association, and expressively powerful by mechanisms of metapsychology. This conclusion suggests that human communication, and hence linguistic communication, evolved as a by-product of increased social intelligence. As such, human communication may be best seen, from an evolutionary perspective, as a particularly sophisticated form of social cognition: mutually-assisted mindreading and mental manipulation. More generally, I will highlight the often neglected importance of pragmatics for the study of language origins.

2015-02-03

On the Evolution of Combinatorial Phonological Structure within the Word: Sign Language Evidence

David Perlmutter Linguistics, UCSD

+ more

Human languages, spoken and signed, have combinatorial systems that combine meaningless smaller units to form words or signs. In spoken languages the smaller units are the sounds of speech (phonemes). In sign languages they are handshapes, movements, and the places on the body where signs are made. These constitute phonological structure. Because it builds structure by combining smaller units, phonological structure in both spoken and signed languages is combinatorial. This paper addresses the evolution of combinatorial phonological structure.

Phonological combinatoriality evolved in spoken languages too long ago to be traced. In sign languages that evolution is much more recent and therefore more amenable to study. We argue that signs with combinatorial phonological structure evolved from holistic gestures that lack such structure, tracing the steps in that evolution. We therefore highlight contrasts between signs, products of that evolution, and holistic gestures, from which they evolved.

Combinatoriality gives signs smaller parts whose properties (phonological features) determine how the signs are pronounced. These features surprisingly predict that although signs may resemble the iconic gestures from which they evolved, signs can have anti-iconic pronunciations. Data from American Sign Language (ASL) confirm this prediction.

Since signs’ pronunciation is determined by phonological features of their smaller parts, in a new sign language that has not yet evolved combinatorial phonological structure, there will be no features to constrain signs’ pronunciation. This predicts that in such a language, pronunciation can vary considerably from one signer to another. This prediction is confirmed by data from Al-Sayyid Bedouin Sign Language (ABSL), a newly emerging sign language.

In addition, we briefly present evidence that chimpanzees exposed to ASL for years learned only a small number of holistic gestures, not the combinatorial sign system learned by signers of ASL. This is explained if humans’ combinatorial abilities that are needed to learn the vocabulary of a human language evolved after the human and chimpanzee lineages diverged.

2015-01-27

Using text to build predictive models of opinions, networks, and social media

Julian McAuley Computer Science and Engineering, UCSD

+ more

Text is an incredibly rich source of data to build reliable models of human behavior and opinions. Consider tasks such as predicting ratings on Netflix, estimating which pair of jeans is better on Amazon, or predicting which content will "go viral" on Reddit. While such problems have traditionally been approached without considering textual data, in this talk we'll show how models that incorporate text can not only produce more accurate predictions, but can also augment those predictions with interpretable explanations. To achieve this we propose a framework to learn joint embeddings of structured data (e.g. ratings) and text, such that the variation in the former can be explained by (and predicted from) the latter.
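
The joint-embedding framework can be made concrete with a small sketch; this is my own toy simplification (not the speaker's actual model or code), showing a single objective in which an item's latent factors must both predict its ratings and explain the words of its reviews.

```python
# Minimal sketch of a joint embedding of ratings and review text: an item's latent
# factors both explain its ratings and act as topic weights over the vocabulary.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_words, k = 4, 3, 6, 2

user_vecs = rng.normal(scale=0.1, size=(n_users, k))
item_vecs = rng.normal(scale=0.1, size=(n_items, k))
word_weights = rng.normal(scale=0.1, size=(k, n_words))   # ties factors to vocabulary

def predict_rating(u, i):
    return user_vecs[u] @ item_vecs[i]

def word_distribution(i):
    """Distribution over words in item i's reviews, driven by the same item factors."""
    logits = item_vecs[i] @ word_weights
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def joint_loss(ratings, reviews, lam=1.0):
    """ratings: list of (user, item, rating); reviews: list of (item, word_id)."""
    rating_err = sum((predict_rating(u, i) - r) ** 2 for u, i, r in ratings)
    text_nll = -sum(np.log(word_distribution(i)[w]) for i, w in reviews)
    return rating_err + lam * text_nll

# Minimizing this joint loss forces the rating factors to be expressible in words,
# which is what makes the learned predictions interpretable.
print(joint_loss(ratings=[(0, 1, 4.0)], reviews=[(1, 2), (1, 5)]))
```
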

2015-01-20

The Organization and Structure of Concepts in Semantic Memory

Ken McRae Department of Psychology, University of Western Ontario

+ more

People use concepts and word meaning every day to recognize entities and objects in their environment, to anticipate how entities will behave and interact with each other, to know how objects should be used, to generate expectancies for situations, and to understand language. Over the years, a number of theories have been presented regarding how concepts are organized and structured in semantic memory. For example, various theories stress that concepts (or lexical items) are linked by undifferentiated associations. Other theories stress hierarchical categorical (taxonomic) structure, whereas still others focus on similarity among concepts. In this talk, I will present evidence that people’s knowledge of real-world situations is an important factor underlying the organization and structure of concepts in semantic memory. I will present experiments spanning word, picture, and discourse processing. Evidence for the importance of situation-based knowledge will cover a number of types of concepts, including verbs, nouns denoting living and nonliving things, other types of relatively concrete noun concepts, and abstract concepts. I will conclude that semantic memory is structured in our mind so that the computation and use of knowledge of real-world situations is both rapid and fundamental.

2015-01-13

Interaction's role in emerging communication systems and their conventionalization: Repair as a means for the fixation of form-meaning matches

Ashley Micklos UCLA

+ more

Interaction is an inherent aspect of human language use, allowing us to build communication through varied resources, negotiate meanings, and pass down practices of the community. The research presented here addresses the nature and role of interactional discourse features, namely repair, eye gaze, and turn-taking, in an experimental language evolution setting in which dyads must disambiguate minimally contrastive noun and verb targets using only silent gesture. Here, using a conversation analytic approach, we see how an emerging silent gesture system is negotiated, changed, and conventionalized in dyadic interactions, and how these processes are changed and transmitted over simulated generations. For example, the strategies for and frequency of repair may be indicative of the stage of evolution/conventionalization of a given language system. Furthermore, particular repair strategies may even promote the fixation of certain gestural forms for marking either noun-ness or verb-ness. The data also suggest a cultural preference for certain discourse strategies, which are culturally transmitted along with the linguistic system.

2014-12-09

The unrealized promise of cross-situational word-referent learning

Linda Smith Department of Psychology & Brain Science, Indiana University

+ more

Recent theory and experiments offer a new solution as to how infant learners may break into word learning, by using cross-situational statistics to find the underlying word-referent mappings. Computational models demonstrate the in-principle plausibility of this statistical learning solution, and experimental evidence shows that infants can aggregate and make statistically appropriate decisions from word-referent co-occurrence data. This talk considers arguments and evidence against cross-situational learning as a fundamental mechanism, and the gaps in current knowledge that prevent a confident conclusion about whether cross-situational learning is the mechanism through which infants break into word learning. I will present very new evidence (and theoretical ideas) suggesting that we need to ask different empirical questions.
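
As a toy illustration (my own, not from the talk) of what cross-situational statistics can do, the sketch below shows how simple co-occurrence counts across individually ambiguous scenes converge on the correct word-referent mappings.

```python
# Minimal sketch of cross-situational word-referent learning by co-occurrence counting.
from collections import defaultdict

# Each learning situation pairs the words heard with the objects in view;
# no single situation disambiguates any word on its own.
situations = [
    ({"ball", "dog"}, {"BALL", "DOG"}),
    ({"ball", "cup"}, {"BALL", "CUP"}),
    ({"dog", "cup"},  {"DOG", "CUP"}),
]

counts = defaultdict(lambda: defaultdict(int))
for words, referents in situations:
    for w in words:
        for r in referents:
            counts[w][r] += 1          # tally every word-referent pairing in the scene

for word in ("ball", "dog", "cup"):
    best = max(counts[word], key=counts[word].get)
    print(word, "->", best)            # each word ends up mapped to its true referent
```
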

2014-12-02

Gesture Tracking for the Investigation of Syntactic Ambiguity and Cognitive Processing

Hunter Hatfield Department of Linguistics, University of Otago

+ more

Innovations in methodology can be as important to scientific progress as innovations in theory. The Otago PsyAn Lab (OPAL) experimental platform is an open-source set of tools allowing a researcher to design and conduct experiments in a native Android™ environment using touchscreen devices. In Experiment 1, syntactic processing of well-studied phenomena is investigated. In a Self-Guided Reading task, a novel method introduced in this research, participants read sentences by underlining masked text using a finger. The location of the finger was tracked character-by-character. Growth curve analysis revealed significant differences between curves for all sets of stimuli. Moreover, the location of the change in behaviour was at the predicted location in the sentence, which is not consistently revealed by other methodologies. In Experiment 2, object and subject relative clauses were investigated. Intriguingly, the point at which the sentence types diverged was earlier than documented using Self-Paced Reading, and more in support of Surprisal theories of processing than Locality theories. This research is placed in a broader context of the merits and drawbacks of the touchscreen methods and plans for work beyond just syntactic ambiguity.

2014-11-25

Hemispheric Differences in Activating Event Knowledge During Language Comprehension

Ross Metusalem Department of Cognitive Science, UCSD

+ more

Discourse comprehension often entails inferring unstated elements of described scenarios or events through activation of relevant knowledge in long-term memory. Metusalem et al. (2012) examined the degree to which unstated event knowledge elements are activated during incremental comprehension, specifically at points in a sentence at which those elements would constitute semantic anomalies. Using the event-related brain potential (ERP) method, they found that words that violate the local semantic context but align with the described event elicit a reduced N400 compared to equally anomalous words that do not align with the event. This N400 pattern was argued to indicate that real-time activation of event knowledge occurs with at least partial independence from the immediate sentential context.

The present study addresses contributions of the two cerebral hemispheres to the effect observed by Metusalem et al. While the left hemisphere (LH) has been argued to support expectations for upcoming words through semantic feature pre-activation, the right hemisphere (RH) has been shown to activate concepts beyond those that would constitute expected continuations of the sentence in support of discourse-pragmatic processes. It was therefore hypothesized that RH activity may be driving much, if not all, of the difference in N400 amplitude between event-related and event-unrelated anomalous words in Metusalem et al.’s data.

In the present experiment, Metusalem et al.’s stimuli were used, now with target words presented to either the right or the left visual field (RVF/LVF). This visual half-field presentation provides a processing advantage to the hemisphere contralateral to the visual field of presentation, accentuating processing by that contralateral hemisphere in the scalp-recorded ERP waveforms. The results show that reduction in N400 amplitude for event-related vs event-unrelated anomalies is found only with LVF/RH presentation. This result is discussed with respect to theories of hemispheric specialization in language processing.

2014-11-18

Names, Adjectives, & Gender: The Social Evolution of Linguistic Systems

Melody Dye Department of Cognitive Science, Indiana University

+ more

According to a common metaphor, language is a vehicle for encoding our thoughts and decoding those of others, or of ‘packing’ and ‘unpacking’ the stuff of thought into linguistic form. While this can be a useful methodological framing, it has run aground on a number of serious empirical and epistemological challenges. In this talk, I will discuss how information theory can offer a reformulation of the traditional ‘code model’ of communication. On this view, meaning does not reside in words or sentences, but in the exchange – and progressive alignment – of speakers with more (or less) similar codes. Such a perspective emphasizes the importance of uncertainty, prediction, and learning in communication, casting human languages as systems of social exchange that have evolved both to optimize the flow of information between speakers, and to balance the twin demands of comprehension and production. In support of this framing, I will report on a pair of cross-linguistic projects: one, contrasting the evolution of naming systems in the East and in the West, and the other, comparing the functional role of grammatical gender with that of prenominal adjectives across two Germanic languages. This work suggests a principled way of beginning to piece apart those evolutionary pressures on language that are universal, from those that are bound to specific social environments.

2014-11-04

In constrained contexts, preschoolers’ recognition of accented words is excellent

Sarah Creel Department of Cognitive Science, UCSD

+ more

Do unfamiliar accents impair young children’s language comprehension? Infants detect familiarized word-forms heard in accented speech by 13 months, yet 4-year-olds have difficulty repeating isolated words in unfamiliar accents. The current work attempts to integrate these disparate findings by testing accented word recognition with or without semantic constraint, visual-contextual constraint, and rapid perceptual accent adaptation.

Monolingual English-learning preschoolers (n=32) completed an eye-tracked word recognition test. On each trial, four pictures appeared; 500 milliseconds later, a sentence—sensical or nonsensical, American-accented or Spanish-accented—was spoken. Children attempted to select mentioned pictures as eye movements were tracked. Word-recognition accuracy and visual fixations were higher for sensical than nonsensical sentences. However, accuracy did not differ between accents, and fixations differed only marginally. Thus, preschool-aged children adeptly recognized accented words with semantic and visual-contextual constraint. A second experiment showed lower recognition of Spanish-accented than American-accented words when words were excised from sentences. Throughout, children showed no tendency toward mutual exclusivity responses (selecting a novel object when hearing an accented word), unlike in earlier studies of familiar-accented mispronunciations (Creel, 2012). Ongoing work assesses children's accuracy in repeating words (no visual-contextual constraints). Overall, results suggest that decontextualized accented speech is likely to be more difficult for young children to process than is contextually-constrained speech.

2014-10-28

Context in pragmatic inference

Judith Degen Department of Psychology, Stanford University

+ more

In the face of underspecified utterances, listeners routinely and without much apparent effort make the right kinds of pragmatic inferences about a speaker’s intended meaning. I will present a series of studies investigating the processing of one type of inference -- scalar implicature -- as a way of addressing how listeners perform this remarkable feat. In particular, I will explore the role of context in the processing of scalar implicatures from “some” to “not all”. Contrary to the widely held assumption that scalar implicatures are highly regularized, frequent, and relatively context-independent, I will argue that they are in fact relatively infrequent and highly context-dependent; both the robustness and the speed with which scalar implicatures from “some” to “not all” are computed are modulated by the probabilistic support that the implicature receives from multiple contextual cues. I will present evidence that scalar implicatures are especially sensitive to the naturalness or expectedness of both scalar and non-scalar alternative utterances the speaker could have produced, but didn’t. In this context I will present a novel contextualist account of scalar implicature processing that has roots in both constraint-based and information-theoretic accounts of language processing and that provides a unified explanation for a) the varying robustness of scalar implicatures across different contexts, b) the varying speed of scalar implicatures across different contexts, and c) the speed and efficiency of communication.

2014-10-21

Social robots: things or agents?

Morana Alac Department of Communication

+ more

In our digital, post-analog times, questions of where the world stops and the screen starts, and how to discern the boundary between agency and things, are common. One source of such conundrums is social robots. For their designers, social robots are fascinating as they combine aspects of machines with those of living creatures: they offer the opportunity to ask how matter can be orchestrated to generate impressions of life and sociality. Social science literature on social robots, on the other hand, has mostly engaged the social/agential (and cultural) character of these technologies, leaving the material aspects to their designers. This talk proposes a social science account that is sensitive to both – the objecthood and agency of social robots. It does so by focusing on actual engagements between robots and those who encounter them as a part of everyday practices in social robotics. I pay specific attention to spatial arrangements, body orientation, gaze, use of gesture and tactile exploration in those interactions. In other words, I ask how the boundary between agency and things is practically resolved through a multimodal and multisensory coordinated engagement in the world as we live it.

2014-10-14

Conceptual elaboration facilitates retrieval in sentence processing

Melissa Troyer Department of Cognitive Science, UCSD

+ more

Sentence comprehension involves connecting current linguistic input with existing knowledge about the world. We propose that this process is facilitated (a) when more information is known about referents in the sentence and (b) when comprehenders have greater world knowledge. In single sentences, items with more features can exhibit facilitated retrieval (Hofmeister, 2011). Here, we investigate retrieval when such information is presented over a discourse, rather than within a single sentence. Participants read texts introducing two referents (e.g., two senators), one of which was described in greater detail than the other (e.g., ‘The Democrat had voted for one of the senators, and the Republican had voted for the other, a man from Ohio who was running for president’). The final sentence (e.g., ‘The senator who the {Republican / Democrat} had voted for…’) contained a relative clause picking out either the many-cue referent (with ‘Republican’) or the one-cue referent (with ‘Democrat’). We predicted facilitated retrieval for the many-cue condition at the verb region (‘had voted for’), where ‘the senator’ must be understood as the object of the verb. Participants also completed the Author and Magazine Recognition Tests (ART/MRT; Stanovich & West, 1989), a measure of print experience and a proxy for world knowledge. Since high scorers may have greater experience accessing knowledge in semantic memory, we predicted that they might drive retrieval effects. Indeed, across two experiments, high scorers on the ART/MRT exhibited the predicted effect. Results are consistent with a framework in which conceptual and not just linguistic information directly impacts word retrieval and thereby sentence processing. At least in individuals with greater print exposure, perhaps indicative of greater knowledge, elaboration of conceptual information encoded throughout a discourse seems to facilitate sentence processing.

2014-06-03

Elicitation of early negativity (EN) in sentence-processing contexts depends on attentional efficiency

Chris Barkley Department of Linguistics, UCSD

+ more

This study investigates the language-attention interface using early negativity (EN), elicited between 100 and 300 msec in sentence processing contexts (and commonly referred to as the “eLAN”), as the dependent measure. EN was first elicited in response to “word-category violations” (WCVs) of the type The man admired {a sketch OF / *Don’s OF sketch} the landscape (Neville et al., 1991). These responses were initially interpreted as an index of first-pass structure-building operations (Friederici, 2002) but later reinterpreted as an index of low-level sensory form-based processing (Dikker, 2009). We hypothesized instead that EN is ontologically an attentional response, and therefore that the physical parameters of the EN should co-vary with measures of attentional efficiency. Under this view, the executive attention system is engaged as subjects monitor for ungrammaticality, orienting them to unexpected, task-relevant stimuli, resulting in selective attention to the WCV, enhanced sensory processing, and increases in the amplitude of the domain-general N100 response.

Here I report preliminary results from a sentence processing experiment including sentences with WCVs, filler sentences containing violations intended to elicit standard LAN, N400, and P600 effects, and an attention task designed to assess the efficiency of an individual’s alerting, orienting, and executive attention networks. Results of an attentional efficiency-based median split analysis of 36 subjects showed that the EN was elicited in only two groups: the low efficiency orienting and executive groups. In contrast, significant LAN, N400, and P600 were elicited in all groups and only differed minimally in their physical parameters.

These data suggest that EN effects may be mere attentional modulations of the N100. We hypothesize that while comprehenders with high-efficiency attentional systems possess adequate resources to accommodate WCVs, low-efficiency comprehenders must engage additional selective attentional resources in order to process WCVs, leading to enhancements of N100 amplitude. This finding highlights the importance of investigating cognitive systems beyond working memory in sentence processing contexts.

2014-05-27

Computational Models for the Acquisition of Phonological Constraints

Gabriel R Doyle Department of Linguistics, UCSD

+ more

Phonology, whether approached from a rule-based or Optimality Theory viewpoint, relies on a set of rules or constraints that shape the sound patterns of a language. But where does this set come from? The most common, sometimes unstated, solution is to treat the set as innate and language-universal. This universality has some explanatory benefits, but it is a strong assumption, and one influenced largely by a lack of viable methods for learning constraints.

We propose two computational models for markedness constraint acquisition in an Optimality Theory framework. The first uses minimal phonological structure to learn a set of constraint violations that can be used to identify probable constraints. The second uses a similar learning structure but includes a basic grammar for constraints to jointly learn violations and the structure of these constraints. These methods, tested on Wolof vowel harmony and English plurals, learn systems of constraints that explain the observed data as well as the constraints in a standard phonological analysis do, with a violation structure that largely corresponds to the standard constraints. These results suggest that phonological constraints are theoretically learnable, making phonological acquisition behavior the critical data point for deciding between theories with innate and learned constraints. This is joint work with Klinton Bicknell and Roger Levy.
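
For background, here is a minimal sketch of standard Optimality Theory candidate evaluation under ranked constraints; the constraint names and candidates are invented for illustration, and this is not one of the acquisition models themselves.

```python
# Minimal sketch of standard Optimality Theory evaluation: given ranked constraints
# and each candidate's violation counts, keep only the candidates with the fewest
# violations at each constraint, working down the ranking until one survives.
def ot_winner(candidates, ranking):
    """candidates: {form: {constraint: violations}}; ranking: list, highest first."""
    surviving = list(candidates)
    for constraint in ranking:
        best = min(candidates[form].get(constraint, 0) for form in surviving)
        surviving = [f for f in surviving if candidates[f].get(constraint, 0) == best]
        if len(surviving) == 1:
            break
    return surviving[0]

# Toy English plural example with made-up constraint names: the faithful but
# voicing-disagreeing candidate loses to the devoiced candidate.
candidates = {
    "kaetz": {"*VoicingDisagreement": 1, "Ident(voice)": 0},
    "kaets": {"*VoicingDisagreement": 0, "Ident(voice)": 1},
}
print(ot_winner(candidates, ranking=["*VoicingDisagreement", "Ident(voice)"]))  # kaets
```
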

2014-05-13

How The Eyes Recognize Language: An Investigation in Sign Language Acquisition and Adult Language Processing

Rain Bosworth and So-One Hwang Department of Psychology and Center for Research in Language, UCSD

+ more

Newborn infants demonstrate an early bias for language signals, which contributes to their ability to acquire any of the world’s diverse range of spoken languages. One study found that young infants have an attentional bias for viewing sign language narratives over pantomimes (Krentz & Corina, 2008). We recently replicated this finding with single signs and body grooming/action gestures. Thus, the human capacity to recognize linguistic input from birth may arise not simply from sensitivity to the acoustic properties of speech but from sensitivity to more general patterns that can be transmitted in either the spoken or the signed modality.

In this talk, we will describe a series of new experiments designed to investigate the following questions: How important is the temporally-encoded patterning of sign languages for 1) language recognition and acquisition in children, and 2) language processing among signing and non-signing adults? What are the gaze profiles of young infants exposed to sign language at home, and how do they compare with those of skilled adult signers? To study these questions, we created videos of natural signing, using both single-sign and narrative recordings, and novel non-linguistic stimuli by time-reversing these videos. We measured percent looking time and gaze trajectories for these “natural” and “reversed” stimuli, using a Tobii eyetracker because of its utility in testing across ages -- in infants, children, and adults. In addition to the eyetracking measures, we also obtained behavioral measures of intelligibility to better understand the impact of natural and unnatural temporal dynamics on language processing in the signed modality. Findings from this work may provide a useful tool in detecting early proficiency in language processing during infancy.

2014-05-06

Are automatic conceptual cores the gold standard of semantic processing? The context-dependence of spatial meaning in grounded congruency effects

Larry Barsalou Department of Psychology, Emory University

+ more

According to grounded cognition, words whose semantics contain sensory-motor features activate sensory-motor simulations, which, in turn, interact with spatial responses to produce grounded congruency effects (e.g., processing the spatial feature of up for sky should be faster for up vs. down responses). Growing evidence shows these congruency effects do not always occur, suggesting instead that the grounded features in a word’s meaning do not become active automatically across contexts. Researchers sometimes use this as evidence that concepts are not grounded, further concluding that grounded information is peripheral to the amodal cores of concepts. We first review broad evidence that words do not have conceptual cores, and that even the most salient features in a word’s meaning are not activated automatically. Then, in three experiments, we provide further evidence that grounded congruency effects rely dynamically on context, with the central grounded features in a concept becoming active only when the current context makes them salient. Even when grounded features are central to a word’s meaning, their activation depends on task conditions.

2014-04-29

Short-term memory for ASL fingerspelling and print

Zed Sevcikova School of Speech, Language, and Hearing Sciences, SDSU

+ more

This study investigates how printed and fingerspelled words are coded in short-term memory. Hearing readers recode print into a phonological code for short-term memory (STM), but evidence for phonological recoding in deaf readers has been mixed. It is unclear to what extent reading abilities or phonological awareness relate to the use of a phonological code in STM in deaf readers. In sign languages, orthography can be indirectly represented through fingerspelling. However, little is known about whether fingerspelling is used as an additional code to store and rehearse printed words, or whether fingerspelled words are recoded into English. In this study, we investigated whether phonological and manual similarity affect word list recall when to-be-recalled items are presented as print or fingerspelling. Twenty deaf ASL signers performed an immediate serial recall task with print stimuli and another 20 deaf ASL signers with fingerspelled stimuli. Twenty hearing non-signers were included as a control group for printed words. All participants also completed a range of standardized reading and language assessments, including measures of spelling recognition, phonological awareness and reading comprehension. The stimuli were controlled for phonological similarity and for manual similarity. Deaf and hearing groups both displayed a phonological similarity effect for printed words. Interestingly, deaf readers also showed a phonological similarity effect for fingerspelling. We did not find evidence for a manual similarity effect for either printed words or fingerspelled words. These results suggest that in short-term rehearsal, ASL fingerspelling is quickly recoded into an English phonological code. I will further discuss these findings in the context of individual differences in phonological awareness, reading and language skills.

2014-04-22

Remediation of abnormal visual motion processing significantly improves attention, reading fluency, and working memory in dyslexia

Teri Lawton Department of Computer Science and Engineering, UCSD

+ more

Temporal processing deficits resulting from sluggish magnocellular pathways in dorsal stream cortical areas have been shown to be a key factor limiting reading performance in dyslexics. To investigate the efficacy of reading interventions designed to improve temporal processing speed, we performed a randomized trial on 75 dyslexic second graders in six public elementary schools, comparing interventions targeting the temporal dynamics of the auditory and/or visual pathways with the school’s regular reading intervention (control group). Standardized tests of reading fluency, attention, and working memory were used to evaluate improvements in cognitive function using ANCOVAs. Most dyslexics in this study had abnormal visual motion processing, having elevated contrast thresholds for movement-discrimination on a stationary, textured background. Visual movement-discrimination training to remediate abnormal motion processing significantly improved reading fluency (both speed and comprehension), attention, phonological processing, and auditory working memory, whereas auditory training to improve phonological processing did not significantly improve these skills. The significant improvements in phonological processing and in both sequential and nonsequential auditory working memory demonstrate that visual movement-discrimination training improves auditory skills even though it trains visual motion discrimination. This suggests that training early in the visual dorsal stream improved higher levels of processing in that stream, where auditory and visual inputs converge in parietal cortex, and that improving the timing and sensitivity of movement discrimination strengthens endogenous attention networks. These results implicate sluggish magnocellular pathways in dyslexia, and argue against the assumption that reading deficiencies in dyslexia are only phonologically-based.

2014-04-15

Speed reading? You've gotta be Spritzin' me

Liz Schotter Department of Psychology, UCSD

+ more

Recently, web developers have spurred excitement around the prospect of achieving speed reading with apps that use RSVP (rapid serial visual presentation) to present words briefly and sequentially. They claim that reading in this way not only makes the process faster, but also improves comprehension. In this talk, I will describe some findings from the field of reading research that contradict these claims. In particular, I will describe studies that suggest that the brain tightly controls the sequence and duration of access to information from words in sentences; therefore any piece of technology that takes away that control from the reader will impair the reading process to some degree.

2014-04-01

Comprehension priming as rational expectation for repetition: Evidence from syntactic processing

Mark Myslin Department of Linguistics, UCSD

+ more

Why do comprehenders process repeated stimuli more rapidly than novel stimuli? The most influential hypotheses of these priming effects appeal to architectural constraints, stating that the processing of a stimulus leaves behind residual activation or strengthens its learned representation in memory. We propose an adaptive explanation: priming is a consequence of expectation for repetition due to rational adaptation to the environment. If occurrences of a stimulus cluster in time, given one occurrence it is rational to expect a second occurrence closely following. We test this account in the domain of structural priming in syntax, making use of the sentential complement-direct object (SC-DO) ambiguity. We first show that sentences containing SC continuations cluster in natural language, motivating an expectation for repetition of this structure. Second, we show that comprehenders are indeed sensitive to the syntactic clustering properties of their current environment. In a between-groups self-paced reading study, we find that participants who are exposed to clusters of SC sentences subsequently process repetitions of SC structure more rapidly than participants who are exposed to the same number of SCs spaced in time, and attribute the difference to the learned degree of expectation for repetition. We model this behavior through Bayesian belief update, showing that (the optimal degree of) sensitivity to clustering properties of syntactic structures is indeed learnable through experience. These results support an account in which comprehension priming effects are the result of rational expectation for repetition based on adaptation to the linguistic environment.
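
As a toy illustration of the expectation-for-repetition idea (my own minimal sketch, not the Bayesian model reported in the talk), the snippet below estimates P(SC | the previous sentence was SC) from a comprehender's recent input; the add-one smoothing makes the estimate a posterior mean under a uniform Beta(1,1) prior.

```python
# Minimal sketch: learn an expectation for repetition of sentential-complement (SC)
# structure from a 0/1 sequence of sentences (1 = SC continuation, 0 = direct object).
def repetition_expectation(sequence):
    """Posterior mean of P(SC | previous sentence was SC), uniform Beta(1,1) prior."""
    sc_after_sc, sc_prev = 1, 2               # add-one (Laplace) smoothing
    for prev, cur in zip(sequence, sequence[1:]):
        if prev == 1:
            sc_prev += 1
            sc_after_sc += cur
    return sc_after_sc / sc_prev

clustered = [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0]   # SCs occur in bursts
spaced    = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # same number of SCs, spread out

print(repetition_expectation(clustered))  # ~0.62: one SC makes another SC likely soon
print(repetition_expectation(spaced))     # ~0.17: repetition is not expected
```
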

2014-03-04

Hearing a Who: Preschoolers and Adults Process Language Talker-Contingently (Preview of an invited talk at CUNY 2014)

Sarah Creel UC San Diego

+ more

Listeners process sentences, but they also process people. Research in the past few decades indicates that a talker’s identity or (perceived) social group influences language processing at a variety of levels: phonological (e.g. Niedzielski, 1999), lexical (e.g. Goldinger, 1996), syntactic (Kamide, 2012), and discourse (Horton & Gerrig, 2005).

Do these instances of talker specificity reflect small-scale flexibility of highly abstract language knowledge, or do they represent a crucial facet of language processing? I argue the latter. At least two critical elements of language processing are profoundly affected by talker identity. First is phonemic category extraction: listeners who are new to a language have difficulty generalizing speech sound and word recognition to new voices, and are aided by voice variability during learning (e.g. L1: Houston & Jusczyk, 2000; L2: Lively et al., 1993). Second are higher-level expectation effects in language processing, at the level of discourse processing and “talker-semantic” encoding. I will touch briefly on issues of phonemic category extraction and word encoding, but I will primarily discuss discourse and semantic aspects of talker identity, including my own research on the development of talker processing.

A variety of studies suggest that language is a powerful cue to social groups (Eckert, 2008). Knowing someone’s social group, or even their particular identity, influences on-line sentence processing. Adults in an ERP paradigm who heard identical sentences spoken by either a congruous or an incongruous talker (e.g. adult vs. child saying “I want to drink the wine”) showed a larger N400 semantic mismatch negativity to the target word when the incongruous talker spoke the sentence (Van Berkum et al., 2008). In my own research, I have shown that preschool-aged children direct eye movements preferentially to shapes of the talker’s favorite color when that individual is talking (“Show me the circle”; Creel, 2012). In collaborative work (Borovsky & Creel, in press), 3- to 10-year-olds, as well as adults, activated long-term knowledge about different individuals (e.g. pirates vs. princesses) based on who spoke the sentence. Specifically, participants hearing a pirate say “I want to hold the sword” directed eye movements preferentially to a sword picture prior to word onset, despite the presence of other pirate-related (a ship) and holdable (a wand) pictures. This suggests that children can use voice information to identify individuals and activate knowledge that constrains sentence processing in real time. Finally, a new study in my lab suggests that preschool-aged children concurrently encode novel word-referent mappings and novel person-referent mappings.

The studies reviewed here suggest that listeners’ language apprehension is affected in real time by inferences of who is speaking. This is much more consistent with an interactive view of language processing than a modular view. Even quite young children appear to condition or contextualize their language input based upon who is saying it, suggesting that language acquisition itself is talker-contingent.

2014-02-25

Studying the role of iconicity in the cultural evolution of communicative signals

Tessa Verhoef UC San Diego

+ more

When describing the unique combination of design features that make human languages different from other communication systems, Hockett (1960) listed 'arbitrariness' among them. However, modern knowledge about languages suggests that form-meaning mappings are less arbitrary than previously assumed (Perniss et al., 2010). Sign languages especially, but also certain spoken languages (Dingemanse, 2012), are actually quite rich in iconic or motivated signals, in which there is a perceived resemblance between form and meaning. I will present two experiments to explore how iconic forms may emerge in a language, how arbitrariness or iconicity of forms relates to the affordances of the medium of communication, and how iconic forms interact and possibly compete with combinatorial sublexical structure. In these experiments, artificial languages with whistled words for novel objects were culturally transmitted in the laboratory. In the first experiment, participants learned an artificially generated whistled language and reproduced the sounds with the use of a slide whistle. Their reproductions were used as input for the next participant. Participants were assigned to two different conditions: one in which the use of iconic form-meaning mappings was possible, and one in which the use of iconic mappings was experimentally made impossible. The second experiment involved an iterated communication game. Pairs of participants were asked to communicate about a set of meanings using whistled signals. The meaning space was designed so that some meanings could be more easily paired with an iconic form while others were more difficult to map directly onto the medium of communication. Findings from both experiments suggest that iconic strategies can emerge in artificial whistled languages, but that iconicity can become degraded as well when forms change to become more consistent with emerging sound patterns. Iconicity seems more likely to persist and contribute to successful communication if it serves as a means for establishing systematic patterns.

2014-01-28

Parallel language activation and inhibitory control in bimodal bilinguals

Marcel Giezen San Diego State University

+ more

Bilinguals non-selectively access word candidates from both languages during auditory word recognition. To manage such cross-linguistic competition, they appear to rely on cognitive inhibition skills. For instance, two recent studies with spoken language bilinguals found that individual differences in nonlinguistic conflict resolution abilities predicted language co-activation patterns. It has been suggested that the association between parallel language activation and performance on certain inhibitory control tasks reflects underlying similarities in cognitive mechanisms, more specifically, the processing of perceptual conflict. In the present study, we put this idea to the test by investigating the relationship between language co-activation and inhibitory control for bilinguals with two languages that do not perceptually compete, namely bimodal bilinguals.

Parallel language activation was examined with the visual world eye-tracking paradigm. ASL-English bilinguals’ eye movements were monitored as they listened to English words (e.g., “paper”) while looking at displays with four pictures including the target picture, a cross-linguistic phonological competitor (e.g., cheese; the ASL signs for cheese and paper only differ in their movement), and two unrelated pictures. Results showed that competitor activation during the early stages of word recognition correlated significantly with inhibition performance on a non-linguistic spatial Stroop task. Bilinguals with a smaller Stroop effect (indexing more efficient inhibition) exhibited fewer looks to ASL competitors.

Our results indicate that bimodal bilinguals recruit domain-general inhibitory control mechanisms to resolve cross-linguistic competition. Importantly, because spoken and sign languages do not have a shared phonology, this suggests that the role of inhibitory control in bilingual language comprehension is not limited to resolving perceptual competition at the phonological level, but also cross-linguistic competition that originates at the lexical and/or conceptual level. These findings will be discussed within current frameworks of bilingual word recognition and in light of the ongoing debate on bilingual advantages in cognitive control.

2014-01-21

Fluid Construction Grammar

Luc Steels ICREA, Institute for Evolutionary Biology (UPF-CSIC), Barcelona; VUB AI Lab Brussels

+ more

Fluid Construction Grammar (FCG) is an operational computational formalism that aims to capture key insights from construction grammar, cognitive linguistics and embodied semantics. The central unit of description is a construction with a semantic and a syntactic pole. Constructions formulate constraints at any level of language (phonetics, phonology, morphology, syntax, semantics and pragmatics) and are applied using unification-style match and merge operations. FCG uses a semantics which is procedural and grounded in sensori-motor states. Flexible language processing and learning are implemented using a meta-level in which diagnostics detect anomalies or gaps and repair strategies try to cope with them, by ignoring ungrammaticalities or expanding the language system. FCG has been used chiefly as a research tool for investigating how grounded language can emerge in populations of robots.

This talk presents an overview of FCG and is illustrated with a live demo.

+ Steels, L. (2013) Fluid Construction Grammar. In Hoffmann, T. and G. Trousdale (eds.) Handbook of Construction Grammar. Oxford University Press, Oxford.

+ Steels, L. (ed.) (2011) Design Patterns in Fluid Construction Grammar. John Benjamins Pub. Amsterdam.
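As a rough illustration of the unification-style match-and-merge idea, consider the sketch below. This is not FCG itself (which is a full Lisp-based formalism with far richer feature structures); the dictionary representation and the toy "kiss" construction are assumptions made only for exposition.

```python
# Highly simplified sketch of "match and merge" (illustrative only; the real
# FCG formalism is far richer and is implemented as its own Lisp-based system).

def match(construction_pole, transient_structure):
    """A pole matches if none of its features conflict with the transient structure."""
    return all(transient_structure.get(k, v) == v
               for k, v in construction_pole.items())

def merge(construction_pole, transient_structure):
    """Merging adds the construction's features to the transient structure."""
    merged = dict(transient_structure)
    merged.update(construction_pole)
    return merged

# A toy construction with a semantic and a syntactic pole.
kiss_cxn = {
    "sem": {"frame": "kiss", "agent": "?x", "patient": "?y"},
    "syn": {"cat": "transitive-clause", "verb-form": "kiss"},
}

# Comprehension direction: the syntactic pole matches the input,
# and the semantic pole is merged in.
transient = {"cat": "transitive-clause", "verb-form": "kiss"}
if match(kiss_cxn["syn"], transient):
    transient = merge(kiss_cxn["sem"], transient)
print(transient)
```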

2014-01-14

Olfactory language across cultures

Asifa Majid Radboud University Nijmegen

+ more

Plato proposed: “the varieties of smell have no name, and they have not many, or definite and simple kinds; but they are distinguished only as painful and pleasant”. This view pervades contemporary thought, and experimental data to date provide ample support. This has led researchers to propose there must be a biological limitation underlying our inability to name smells. However, recent studies with two hunter-gatherer communities in the Malay Peninsula challenge this received wisdom. Jahai speakers, for example, were able to name odors with the same ease with which they named colors, unlike a matched English sample of participants who struggled to name familiar western odors. Moreover, Jahai speakers use a set of dedicated smell verbs to describe different smell qualities. Nothing comparable exists in today's conversational English. The Jahai are not the only group with such a smell lexicon. A related language, Maniq, also shows a sizeable smell lexicon, although the precise terms differ from those found in Jahai. The Maniq smell lexicon shows a coherent internal structure organised around two dimensions, pleasantness and dangerousness. Together these languages show that the poor codability of odors is not a necessary product of how the brain is wired, but rather a matter of cultural preoccupation.

2014-01-07

Effects of literacy on children’s productions of complex sentences

Jessica L. Montag Indiana University

+ more

When people speak, they have many choices about how to say what they want to say. This largely unconscious process – of choosing words and sentence structures – is poorly understood. I will argue that we can begin to understand these production choices by understanding what is easy or difficult for speakers to produce. One aspect of this difficulty is the frequency with which a speaker has encountered or produced an utterance in the past. In my talk, I will discuss a set of corpus analyses and production experiments with children and adults that investigated how the amount of language experience and emerging literacy affect production choices. These studies show how children gradually learn to identify alternative language forms from their linguistic environment, how the linguistic environment changes over time as children grow, and how children’s control over complex sentence structures continues to develop well after early stages of language learning.

2013-12-03

Impact of language modality for gesturing and learning

So-One Hwang UC San Diego

+ more

Our research team has been investigating whether gesture can be reliably distinguished from language when they are expressed in the same modality by deaf signers, and whether gesture plays a role in problem-solving tasks as it does for young hearing children using co-speech gesture (Goldin-Meadow et al. 2012). Building upon the finding that gesture can be used to predict readiness to learn math equivalence among 9- to 12-year-old deaf students, here we tested 5- to 8-year-olds (n=33) on conservation knowledge. Piagetian conservation tasks involve comparisons of objects that are transformed in shape or configuration but not quantity. We asked the children to make judgments about objects’ quantities and asked them to explain their answers. Because young children often describe the appearance of objects in their explanations, we faced methodological challenges. In ASL, shapes and configurations are typically described using polycomponential forms called classifiers. Classifiers are described in the sign language research literature as being distinct from lexical signs, but it is not clear whether they too are lexical, or whether they have properties of gesture. Our results suggest 1) that lexical signs are like words in their ability to refer to abstract properties, and 2) that classifiers can be used flexibly as either lexical forms or gestural forms. The findings suggest that gesture can be beneficial in problem-solving contexts when it supplements rather than substitutes for core linguistic formats for thinking.

2013-11-26

Tarzan Jane Understand? A least-joint-effort account of constituent order change

Matt Hall University of Connecticut

+ more

All natural languages evolve devices to communicate who did what to whom. Elicited pantomime provides one model for studying this process, by providing a window into how humans (hearing non-signers) behave in a natural communicative modality (silent gesture) without established conventions from a grammar. In particular, we use this system to understand the cognitive pressures that might lead languages to shift from Subject-Object-Verb (SOV) toward Subject-Verb-Object (SVO): a pattern that is widely attested over both long and short timescales.

Previous research on production finds consistent preferences *for* SOV in "canonical" events (e.g. a woman pushing a box) but *against* SOV in "reversible" events (e.g. a woman pushing a man). Comprehenders, meanwhile, seem to have no objection to SOV for either type of event, suggesting that ambiguity-based accounts of the production data are unlikely. However, both production and comprehension have previously been tested in isolation. Here we ask whether SVO might emerge, for both reversible and canonical events, as a result of dynamic interaction between producers and comprehenders engaged in real-time communication.

Experiment 1 asked participants to describe both canonical and reversible events in gesture, in two conditions: interactive and solitary. In the interactive condition, two naive subjects took turns describing scenes to one another. In the solitary condition, one participant described the same scenes to a camera. In addition to replicating previous findings, results showed that SVO did increase more in the interactive condition than in the solitary condition, but only among reversible events. SVO also increased among the canonical events, but to the same extent in both interactive and solitary conditions. Experiment 2 ruled out the possibility that the SVO rise among canonical events simply reflects English recoding, and instead demonstrated that it depends on the presence of reversible events.

So why do languages shift toward SVO? The need to communicate about reversible events seems to be part of the answer, but the fact that canonical events also shift toward SVO may be due to production-internal mechanisms. Identifying these mechanisms is a target for future research.

2013-11-19

A different approach to language evolution

Massimo Piattelli-Palmarini University of Arizona

+ more

For many authors, it is literally unthinkable that language as we know it cannot have evolved under the pressure of natural selection for communication, better thinking and social cohesion. The first model I will examine, showing its radical inadequacy, is, therefore, the adaptationist one. In our book, Jerry Fodor and I have tried to explain at some length (Fodor and Piattelli-Palmarini, "What Darwin Got Wrong" 2011) what is wrong quite generally with neo-Darwinian adaptationist explanations. But, even admitting, for the sake of the argument, that such explanations do apply to biological traits in general, I will concentrate on the specific defects of such explanations in the case of language. Syntax has not been shaped, as I will show, by communication or social cohesion. A second model I will criticize is one that conceptualizes language as an application of general cognitive traits, innate generic predispositions to categorize, extract statistical regularities from a variety of inputs, make inferences, learn from experience and assimilate the cultural norms of the surrounding community. The third model is based on general conditions of learnability and progressive simplification of the mental computations attributed to our mastering of language. Computer models of iterative learning, of the stepwise convergence of neural networks on simple solutions, and evolutionary considerations postulating the progressive shaping of language towards better learnability, will be examined and their implausibility explained. Finally, I will present a quite different model, still under development. It appears to be very promising and innovative and capable of re-configuring the entire issue of language evolution. Very recent data from several laboratories and several fields bring further implicit endorsement to this model. In essence, I will offer reasons to conclude that optimization constraints and rules of strict locality allow for some variability under the effects of external inputs, but this range of variation is quite limited, and concentrated in a relatively small fixed number of points, in conformity with what the linguistic model of Principles and Parameters suggested already 25 years ago.

2013-11-12

Knowing too much and trying too hard: why adults struggle to learn certain aspects of language

Amy Finn Massachusetts Institute of Technology

+ more

Adults are worse than children when it comes to learning certain aspects of language. Why is this the case when adults are better than children on most other measures of learning, including almost every measure of executive function? While many factors contribute to this age-related learning difference, I will present work that shows that (1) linguistic knowledge, (2) mature, language-specific neural networks, and (3) mature cognitive function all contribute to these age-related differences in language learning outcomes.

2013-11-05

The changing structure of everyday experience in the first two years of life

Caitlin Fausey Indiana University

+ more

Human experience may be construed as a stream, in time, of words and co-occurring visual events. How do the statistical and temporal properties of this stream engage learning mechanisms and potentially tune the developing system? In this talk, I will describe ongoing work designed to characterize 1) the changing rhythm of daily activity, 2) the changing visual availability of important social stimuli like faces and hands, and 3) the changing distributions of object instances with the same name. This ongoing research suggests that the statistical structure of the learning environment is dynamic and gated by young children's developmental level. The conjecture is that structure in everyday activities, at multiple timescales and changing over the course of development, may drive change in the cognitive system.

2013-10-29

Speak for Yourself: Simultaneous Learning of Words and Talkers’ Preferences

Sarah Creel

+ more

Language presents a complex learning problem: children must learn many word-meaning mappings, as well as abundant contextual information about words’ referents. Can children learn word-referent mappings while also learning context (individuals’ preferences for referents)? Three experiments (n=32 3-5-year-olds each) explored children’s ability to map similar-sounding novel words to referents while also learning talkers’ preferred referents. Both accuracy (assessing word learning) and moment-by-moment visual fixations (assessing talker preference knowledge) were recorded. Words were learned accurately throughout. When liker information (“I want” or “Anna wants”) occurred early in the sentence, children rapidly looked to the liker’s favorite picture. However, when liker information occurred after the target word, children used voice information, even if the speaker ended up naming the other character (“…for Anna”). When liker and talker were dissociated during learning (each talker labeled the other’s favorite), children showed no looking preferences. Results suggest sophisticated encoding of multiple cues during language development.

2013-10-22

Signing in the Visual World: Effects of early experience on real-time processing of ASL signs

Amy Lieberman

+ more

Signed languages present a unique challenge for studying real-time lexical recognition, because the visual modality of sign requires the signer to interpret the linguistic and referential context simultaneously. Deaf individuals also vary widely in the timing and quality of initial language exposure. I will present a series of studies investigating real-time lexical recognition via eye-tracking in adult signers who varied in their age of initial exposure to sign language. Using a novel adaptation of the visual world paradigm, we measured the time course and accuracy of lexical recognition of ASL signs, and the effect of phonological and semantic competition on the time course of sign processing. I will discuss implications with regard to the impact of early experience on later linguistic processing skills.

2013-10-15

Different strokes: gesture phrases in Z, a first generation family homesign.

John Haviland

+ more

In order not to prejudge the constituents and categories of "Z," an emerging sign language isolate used in a single extended family with three deaf siblings in highland Chiapas, Mexico, where the surrounding spoken language is Tzotzil (Mayan), I try to apply, in rigorous formal fashion, a model of phrase structure derived from studies of the "speaker's gestures" that accompany spoken language. I try to evaluate the virtues and potential vices of such a methodologically austere approach as applied to spontaneous, natural conversation in Z.

2013-10-08

What do you know and when do you know it?

Ben Amsel

+ more

How is knowledge organized in memory? How do we access this knowledge? How quickly are different kinds of knowledge available following visual word perception? I'll present a series of experiments designed to advance our understanding of these questions. I'll show that the timing of semantic access varies substantially depending on the type of knowledge to be accessed, and that some kinds of information are accessed very rapidly. I'll demonstrate that different kinds of knowledge may be recruited flexibly to make specific decisions. I'll also present strong evidence that the neural processing systems subserving visual perception are directly involved in accessing knowledge about an object’s typical color. Taken together, these findings are most consistent with a flexible, fast, and at least partially grounded semantic memory system in the human brain.

2013-10-01

Duck, Duck, ... Mallard: Advance Word Planning Facilitates Production of Dispreferred Alternatives

Dan Kleinman

+ more

Consider the spoken sentence “Dan fell asleep yesterday on the lab couch.” The speaker likely planned most of its semantic content prior to speech onset (e.g., deciding that the last word would refer to the piece of furniture in question). However, due to the attention-demanding nature of word selection, the speaker may not have selected the final word (“couch”, instead of the equally acceptable “sofa”) until shortly before it was uttered. This difference in automaticity means that, relative to a word produced in isolation, words produced in connected speech can be planned for longer prior to selection. How does this additional pre-selection planning affect the words that speakers choose to say?
I will present two experiments that tested the hypothesis that this extra time increases the accessibility of dispreferred responses. In each experiment, 100 subjects named critical pictures with multiple acceptable names (e.g., “couch”, used by 80% of subjects in a norming study, or “sofa”, used by 20%) under conditions that manipulated how long subjects could plan prior to speaking. In Experiment 1, pictures presented in a dual-task context elicited more dispreferred names (such as “sofa”) than pictures presented in a single-task context. In Experiment 2, pictures named at the end of a sentence (“The tent is above the sofa”) elicited more dispreferred names (at fast response latencies) than pictures named at the beginning of a sentence (“The couch is above the tent”).
These results indicate that when word selection is delayed, low-frequency responses have more time to become accessible and thus are produced more often. Because attentional bottlenecks in language production effectively delay the selection of most words during natural speech, the words we choose are influenced by our ability to plan them in advance.

2013-06-04

Let’s take a look at light verbs: Relationships between syntax, semantics, and event conceptualization

Eva Wittenberg Institut für Germanistik, Potsdam University

+ more

Light verb constructions, such as "Julius is giving Olivia a kiss", create a mismatch at the syntax-semantics interface. Typically, each argument in a sentence corresponds to one semantic role, as in "Julius gave Olivia a book", where Julius is the Source, Olivia the Goal, and the book the Theme. However, a light verb construction such as “Julius gave Olivia a kiss” with three arguments describes the same event as the transitive “Julius kissed Olivia” with two arguments: Julius is the Agent, and Olivia the Patient. This leads to several questions: First, how are light verb constructions such as "giving a kiss" processed differently from sentences such as "giving a book"? Second, at which structural level of representation would we find sources of this difference? Third, what is the effect of using a light verb construction such as "giving a kiss" as opposed to "kissing" on the event representation created in a listener? I will present data from an ERP study, an eye-tracking study, and several behavioral studies to answer these questions.

2013-05-28

Accessing Cross Language Categories in Learning a Third Language

Page Piccinini Department of Linguistics, UCSD

+ more

Current theories differ on how bilinguals organize their two languages, including their sound systems. The debate centers on whether bilinguals have constant access to both systems (Green, 1998; cf. Johnson, 1997; Pierrehumbert, 2002) or to one system at a time (Cutler et al., 1992; Macnamara & Kushnir, 1971). This study examines these theories by testing the ability of early Spanish-English bilinguals to access distinctions within the voice onset time (VOT) continuum when learning a third language that uses VOT categories from both Spanish and English. Participants were tested on Eastern Armenian, which has a three-way VOT contrast: negative, short-lag and long-lag VOT (cf. English, which largely distinguishes short-lag from long-lag VOT, and Spanish, which contrasts negative and short-lag VOT). Participants completed a production task followed by either an AX discrimination task or an ABX discrimination task. Of those who completed the AX task, half received instructions in English and half in Spanish; all who completed the ABX task received instructions in Spanish. Language dominance was also assessed via a questionnaire to see how being dominant in one language over another could affect production and perception of the three-way contrast. In the production experiment there was a significant difference in VOT durations between all three VOT categories. However, there was a significant interaction with language dominance: balanced bilinguals, but not English-dominant bilinguals, reliably produced the negative VOT category. There was no effect of language of instruction. In the AX discrimination task, participants were significantly above chance at discriminating negative VOT from long-lag VOT, significantly below chance at discriminating negative VOT from short-lag VOT, and at chance at discriminating short-lag VOT from long-lag VOT. There was no significant effect of either language of instruction or language dominance. Preliminary results from the ABX discrimination task suggest bilinguals can accurately discriminate all three contrasts, with a marginally significant effect of language dominance: balanced bilinguals discriminated negative from short-lag VOT better than English-dominant bilinguals. These results suggest that in production early Spanish-English bilinguals can reliably produce the three-way contrast, but only if they are balanced in both languages. In perception, early Spanish-English bilinguals are able to discriminate the three-way contrast, as shown by the ABX discrimination task, especially if they are more balanced. However, early Spanish-English bilinguals, both balanced and English-dominant, show a preference for languages with only a two-way contrast, as shown by the AX discrimination task. Overall, these results support a theory whereby bilinguals have access to sounds from both of their languages at once, particularly if they are balanced bilinguals.
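For readers unfamiliar with the VOT continuum, the sketch below bins productions into the three categories at issue. The boundary values are hypothetical placeholders, not the criteria used in this study.

```python
# Illustrative only: bin voice onset time (VOT, in ms) into the three
# categories discussed above. The boundary values are hypothetical
# placeholders, not the criteria used in the study.

def vot_category(vot_ms, short_lag_max=30.0):
    if vot_ms < 0:
        return "negative"          # prevoiced, e.g. Spanish /b d g/
    elif vot_ms <= short_lag_max:
        return "short-lag"         # e.g. Spanish /p t k/, English /b d g/
    else:
        return "long-lag"          # aspirated, e.g. English /p t k/

for vot in [-85.0, 12.0, 65.0]:
    print(vot, "->", vot_category(vot))
```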

2013-05-21

The (un)automaticity of structural alignment

Iva Ivanova UCSD Psychology Department

+ more

Interlocutors in a dialogue often mirror each other’s linguistic choices at different levels of linguistic representation (interactive alignment), which facilitates conversation and promotes rapport (Pickering & Garrod, 2004). However, speakers frequently engage in concurrent activities while in dialogue, such as typing, reading, or listening to programs. Is interactive alignment affected by concurrent participation in other activities that pose demands on working memory? In this talk, I will focus on alignment of structure, which happens as a result of structural priming (Branigan et al., 2000; Jaeger & Snider, 2013). Specifically, I will present three experiments investigating whether structural priming is affected by verbal working memory load. As a whole, the findings suggest that concurrent verbal working memory load may disrupt structural alignment at a (potentially) conceptual but not at a syntactic level of structural processing. Practically, however, they imply that one might align less with one’s interlocutor while simultaneously scanning Facebook updates.
Please note that this is a version of a talk I presented at CUNY this year.

2013-05-14

Ups and downs in auditory development: Preschoolers discriminate contour but fall flat on audiovisual mapping

Sarah Creel Cognitive Science Department, UCSD

+ more

How do children hear the world? Previous research suggests that even infants are sensitive to pitch contour—the ups and downs in a periodic acoustic source. Contour sensitivity is presumed to form the basis for later perception of culture-specific musical patterns (e.g. the Western major scale), and for apprehending musical metaphors (“rising” pitches are upward motion). The current study shows that 4-5-year-old children, while they reliably distinguish contour differences, cannot use contour differences in an audiovisual mapping task. This is not due to general difficulty in associating nonspeech sounds with images. Results call into question the primacy of contour as a dimension of musical representation. Further, results mirror a phenomenon previously observed in word learning (Stager & Werker, 1997), wherein highly-discriminable percepts are difficult for children to associate with visual referents. Thus, difficulty in mapping similar-sounding words to referents may reflect more general difficulty in auditory-visual association learning, likely due to memory interference.
FYI: This is a version of a talk I have given in COGS 200 and the Psychology Cognitive Brownbag.

2013-05-07

Investigating the relations among components of language in typically developing children and children with neurodevelopmental disorders

Lara Polse Joint Doctoral Program, SDSU & UCSD

+ more

Language is a complex multifaceted system, and as we use spoken and written language we simultaneously recruit an array of interrelated linguistic subsystems. While these subsystems have been studied extensively during language acquisition, we know little about the organization and relations among these components in the school-age years. In this talk, I will present four investigations in which I use classically defined components of language (phonological, lexico-semantic, and syntactic) as well as components of reading (orthographic and semantic) as a tool to explore the relations amongst elements that comprise the language system in school aged typically developing children and children with neurodevelopmental disorders (aged 7-12). Investigating the composition of the language system in children with neurodevelopmental disorders that affect language will not only help to create more targeted interventions for these children, but will also provide a unique window through which to better understand the underlying structure and organization of language in typically developing children.

2013-04-30

Meaning Construction in the Embodied and Embedded Mind

Seana Coulson Cognitive Science Department, UCSD

+ more

In classical cognitive science, the body was merely a container for the physical symbol system that comprised the mind. Today, the body plays an increasingly important role in cognitive accounts as next generation cognitive scientists explore the idea that knowledge structures exploit partial reactivations of perceptual, motoric, and affective brain systems. First, the state of one’s own body might affect the way we understand other people’s emotional states as well as language about emotional events. Second, we might observe how other people move their bodies during speech in order to better understand their meaning. Third, we might attend to the way in which speakers’ gestures coordinate internal mental processes with external cultural inscriptions. Accordingly, I describe a series of behavioral and electrophysiological studies that address the real time comprehension of emotional language, iconic gestures in discourse about concrete objects and events, and environmentally coupled gestures in children’s discourse about mathematics.

2013-04-23

Semantic Preview Benefit in Reading: Type of Semantic Relationship Matters

Liz Schotter Psychology Department, UCSD

+ more

Reading is efficient because of the ability to start processing upcoming words before they are fixated (see Schotter, Angele, & Rayner, 2012 for a review). To demonstrate preprocessing of upcoming words, researchers use the gaze-contingent boundary paradigm (Rayner, 1975) in which a preview word changes to a target word during the saccade to it (using eye trackers to monitor fixation location and duration). Reading time measures on the target are compared between various related preview conditions and an unrelated control condition. Faster processing in a related condition compared to the unrelated condition suggests preview benefit—that information was obtained from the preview word parafoveally and used to facilitate processing of the target once it is fixated. While preprocessing of upcoming words at the orthographic and phonological levels is not controversial (i.e., it is well-documented and accounted for in many models of reading), semantic preprocessing of upcoming words is debated: it has mixed support in the literature, and the presence or absence of such an effect has been suggested as a means to distinguish between the two most prominent models of reading, E-Z Reader (e.g., Reichle, Pollatsek, Fisher & Rayner, 1998) and SWIFT (e.g., Engbert, Longtin, & Kliegl, 2002). In this talk, I present two studies using the gaze-contingent boundary paradigm, demonstrating semantic preview benefit in English when the preview and target are synonyms, but not when they are semantically related but not synonymous. I argue that the type of semantic relationship shared between the preview and target has a strong influence on the magnitude of preview benefit, and I discuss this finding in relation to prior studies finding semantic preview benefit (in German and Chinese) and not finding it (in English).
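A minimal sketch of the display-change logic behind the gaze-contingent boundary paradigm is given below. The gaze-sample stream, the render callback, and the boundary position are stand-ins, not any particular eye-tracking API.

```python
# Minimal sketch of the boundary paradigm's display-change logic.
# `gaze_samples` and `render` stand in for whatever eye-tracking and display
# calls a real experiment would use (hypothetical placeholders).

def run_trial(preview_text, target_text, boundary_x, gaze_samples, render):
    """Show the preview until gaze crosses the invisible boundary,
    then swap in the target word (ideally during the saccade)."""
    display_changed = False
    render(preview_text)
    for x in gaze_samples:                 # stream of horizontal gaze positions
        if not display_changed and x > boundary_x:
            render(target_text)            # preview -> target display change
            display_changed = True

# Toy usage with fake gaze samples and a print-based "display":
run_trial("The man saw the sofa.", "The man saw the couch.",
          boundary_x=300,
          gaze_samples=[120, 180, 250, 310, 330],
          render=print)
```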

2013-04-16

A dynamic view of language production

Gary Oppenheim Center for Research in Language, UCSD

+ more

In searching to understand how language production mechanisms work in the moment, we often forget how adaptable they are. In this talk, I'll present a high-level overview of some work that explores this adaptability on two timescales. The first part will focus on speakers' ability to take a system developed for communication and use it (perhaps predominantly) as a tool for thought: inner speech. Here I'll revisit Watson's (1913) claim that "thought processes are really motor habits in the larynx." Then I'll consider adaptation on a longer timescale, with the idea that speakers achieve fluent production by continually re-optimizing their vocabularies with every word retrieval throughout their lives. Here I'll show that a simple incremental learning model naturally explains and predicts an array of empirical findings that our static models have struggled to explain for decades.

Note: This will be a rehearsal for an open-specialization faculty job talk that I'll present at Bangor University (Wales) on May 2. My goal is to polish it into the best 30-minute talk ever, so I would very much appreciate any constructive criticism.

2013-04-09

Experimental evidence for a mimesis-combinatoriality tradeoff in communication systems

Gareth Roberts Yeshiva University

+ more

Sign languages tend to represent the world less arbitrarily than spoken languages, exploiting a much richer capacity for mimesis in the manual modality. Another difference between spoken and signed languages concerns combinatoriality. Spoken languages are highly combinatorial, recombining a few basic forms to express an infinite number of meanings. While sign languages exhibit combinatoriality too, they employ a greater number of basic forms. These two differences may be intimately connected: The less a communication system mimics the world, the greater its combinatoriality. We tested this hypothesis by studying novel human communication systems in the laboratory. In particular we manipulated the opportunity for mimesis in these systems and measured their combinatoriality. As predicted we found that combinatoriality was greater when there was less opportunity for mimesis and, furthermore, that mimesis provided scaffolding for the construction of communication systems.

2013-04-02

Abstract knowledge vs direct experience in linguistic processing

Emily Morgan UCSD, Linguistics Dept.

+ more

Abstract linguistic knowledge allows us to understand novel expressions that we have never heard before. It remains an open question, however, what role this abstract knowledge plays in determining processing difficulty for expressions that are _not_ novel, that is, those with which the speaker has had direct experience. We investigate this in the case of "binomial expressions" of the form "X and Y". Many common binomial expressions have a preferred order (e.g. "bride and groom" vs "groom and bride"), and these ordering preferences are predictable from a small number of linguistic factors. Alternatively, preferences for commonly attested binomial expressions could be attributed to the frequency of speakers' direct experience with those expressions. Using a combination of probabilistic modeling and human behavioral experiments, we investigate the roles of both abstract linguistic constraints and direct experience in the processing of binomial expressions.
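One simple way to picture the competition between abstract constraints and direct experience is to combine a constraint-based score with a frequency-based score, as in the sketch below. The weights, counts, and logistic combination are illustrative assumptions, not the authors' actual model.

```python
import math

# Illustrative sketch: combine an abstract-constraint score with direct
# experience (corpus counts) to predict the probability of the order
# "X and Y" over "Y and X". Weights and counts are made-up placeholders.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def p_order_xy(constraint_score, count_xy, count_yx, w_constraint=1.0, w_freq=1.0):
    """constraint_score > 0 favors "X and Y" on abstract grounds (e.g. short
    word first); the log count ratio captures direct experience."""
    freq_score = math.log((count_xy + 1) / (count_yx + 1))  # add-one smoothing
    return sigmoid(w_constraint * constraint_score + w_freq * freq_score)

# Toy example: "bride and groom" vs "groom and bride"
print(p_order_xy(constraint_score=0.5, count_xy=900, count_yx=60))
```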

2013-03-12

Grounding speech with gaze in dynamic virtual environments

Matthew Crocker Saarland University, Germany

+ more

The interactive nature of dialogue entails that interlocutors are constantly anticipating what will be said next and speakers are monitoring the effects of their utterances on listeners. Gaze is an important cue in this task, potentially providing listeners with information about the speaker's next referent (Hanna & Brennan, 2007) and offering speakers some indication about whether listeners correctly resolved their references (Clark & Krych, 2004).
In this talk, I will first review some recent findings that quantify the benefits of speaker gaze (using a virtual agent) for human listeners. I will then present a new study which demonstrates that a model of speech generation that exploits real-time listener gaze, and gives appropriate feedback, enhances reference resolution by the listener: In a 3D virtual environment, users followed spoken directional instructions, including pressing a number of buttons that were identified using referring expressions generated by the system (see GIVE; Koller et al., 2010). Gaze to the intended referent following a referring expression was taken as evidence of successful understanding and elicited positive feedback; by contrast, gaze to other objects triggered early negative feedback. We compared this eye-movement-based feedback strategy with two baseline systems and found that it led to significantly better task performance than the other two strategies on a number of measures. From a methodological perspective, our findings more generally show that real-time listener gaze immediately following a referring expression reliably indicates how a listener resolved the expression, even in dynamic, task-centered, visually complex environments.
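The feedback policy described above can be summarized as a simple rule, sketched below. The object names and the fixation threshold are hypothetical placeholders rather than the GIVE system's actual implementation.

```python
# Illustrative sketch of the listener-gaze feedback policy described above.
# Object names and the fixation threshold are hypothetical placeholders.

def feedback(fixated_object, intended_referent, fixation_ms, threshold_ms=300):
    """Positive feedback if the listener settles on the intended referent;
    early negative feedback if they settle on something else."""
    if fixation_ms < threshold_ms:
        return None                       # not yet a committed look
    if fixated_object == intended_referent:
        return "positive"                 # e.g. "Yes, that one."
    return "negative"                     # e.g. "No, not that one."

print(feedback("blue_button", "blue_button", fixation_ms=450))
print(feedback("red_button", "blue_button", fixation_ms=450))
```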

2013-03-05

Regularization behavior in a non-linguistic domain

Vanessa Ferdinand University of Edinburgh

+ more

Language learners tend to regularize variable input and some claim that this is due to a language-specific regularization bias. I will present the results of two frequency learning experiments in a non-linguistic domain and show that task demands modulate regularization behavior. When participants track multiple frequencies concurrently, they partially regularize their responses, and when there is just one frequency to track, they probability match from their input data. These results will be compared to matched experiments in the linguistic domain, and some pilot results will be presented. The goal here is to partial out the regularization behavior related to task demands (such as memory limitations), and that which may be due to domain-specific expectations of one-to-one mappings between variants and objects. A Bayesian model is fit to the experimental data to quantify regularization biases across experiments and explore the long-term cultural evolutionary dynamics of regularization and probability matching in relation to a null model, drift.
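The contrast between probability matching and regularization can be illustrated with a toy simulation like the one below. The response rules and parameters are assumptions for exposition, not the Bayesian model fitted to the experimental data.

```python
import random

# Illustrative sketch: given observed variant frequencies, a probability
# matcher reproduces the input proportions, while a regularizer boosts the
# majority variant. The response rules here are toy assumptions.

def probability_match(p_majority, n_responses=100):
    """Respond with the majority variant at the same rate as in the input."""
    return sum(random.random() < p_majority for _ in range(n_responses))

def regularize(p_majority, strength=2.0, n_responses=100):
    """Exaggerate the majority probability toward 1 (strength > 1 regularizes)."""
    boosted = p_majority ** (1 / strength) if p_majority >= 0.5 else p_majority
    boosted = min(boosted, 1.0)
    return sum(random.random() < boosted for _ in range(n_responses))

random.seed(0)
print("matcher:    ", probability_match(0.7), "/ 100 majority responses")
print("regularizer:", regularize(0.7), "/ 100 majority responses")
```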

2013-02-26

Combinatorial structure and iconicity in artificial whistled languages

Tessa Verhoef University of Amsterdam - ACLC

+ more

Duality of patterning, one of Hockett's (1960) basic design features of language, has recently received increased attention (de Boer, Sandler, & Kirby, 2012). This feature describes how, in speech, a limited number of meaningless sounds are combined into meaningful words and those meaningful words are combined into larger constructs. How this feature emerged in language is currently still a matter of debate, but it is increasingly being studied with the use of a variety of different techniques, including laboratory experiments. I will present a new experiment in which artificial languages with whistle words for novel objects are culturally transmitted in the laboratory. The aim of this study is to extend an earlier study in which it was shown that combinatorial structure emerged in sets of meaningless whistles through cultural evolution. In the new study meanings are attached to the whistle words and this further investigates the origins and evolution of combinatorial structure. Participants learned the whistled language and reproduced the sounds with the use of a slide whistle. Their reproductions were used as input for the next participant. Two conditions were studied: one in which the use of iconic form-meaning mappings was possible and one in which the use of iconic mappings was experimentally made impossible, so that we could investigate the influence of iconicity on the emergence of structure.

2013-02-19

On defining image schemas

Jean Mandler

+ more

There are three different kinds of cognitive structure that have not been differentiated in the cognitive linguistic literature. They are spatial primitives, image schemas, and schematic integrations. Spatial primitives are the first conceptual building blocks formed in infancy, image schemas are simple spatial stories built from them, and schematic integrations use the first two types to build concepts that include nonspatial elements, such as force and emotion. These different kinds of structure have all come under the umbrella term of image schemas. However, they differ in their content, developmental origin, imageability, and role in meaning construction.

2013-02-05

Explaining "I can't draw": Parallels in the structure and development of drawing and language

Neil Cohn

+ more

Why is it that many people feel that they "can't draw"? Both drawing and language are fundamental and unique to humans as a species. Just as language is a representational system that uses systematic sounds (or manual/bodily signs) to express concepts, drawing is a means of graphically expressing concepts. Yet, unlike language, we consider it normal for people not to learn to draw, and consider those who do to be exceptional. I argue that the structure and development of drawing are indeed analogous to that of language, and that most people who "can't draw" have a drawing system parallel with the resilient systems of language that appear when children are not exposed to a linguistic system within a critical developmental period (such as "homesign").

2013-01-29

Reasoning with Diagrams in Chronobiology

William Bechtel

+ more

Diagrams are widely used to communicate in biology. But what other functions do they play? I will argue that they are often the vehicles of reasoning, both for individuals and collectives. They serve to characterize and conceptualize the phenomenon to be explained. The construction and revision of diagrams is central to the activities of proposing and revising mechanistic explanations of the phenomenon. To illustrate these roles, I will focus on research on circadian rhythms, endogenously generated rhythms of approximately 24-hours that regulate a large range of biological phenomena across all orders of life. Visual representations are crucial to understanding the periodicity and entrainment of these oscillations and to reasoning about the complex interacting feedback mechanisms proposed to explain them.

2013-01-22

Building Meanings: The Computations of the Composing Brain

Liina Pylkkänen New York University

+ more

Although the combinatory potential of language is in many ways its defining characteristic, our understanding of the neurobiology of composition is still grossly generic: research on the brain bases of syntax and semantics implicates a general network of “sentence processing regions” but the computational details of this system have not been uncovered. For language production, not even a general network has yet been delineated. Consequently, the following two questions are among the most pressing for current cognitive neuroscience research on language:
(i) What is the division of labor among the various brain regions that respond to the presence of complex syntax and semantics in comprehension? What are the computational details of this network?
(ii) How does the brain accomplish the construction of complex structure and meaning in production? How do these processes relate to parallel computations in comprehension?
In our research using magnetoencephalography (MEG), we have systematically varied the properties of composition to investigate the computational roles and spatiotemporal dynamics of the various brain regions participating in the construction of complex meaning. The combinatory network implicated by our research comprises at least an early (~200-300ms), computationally specialized contribution of the left anterior temporal lobe (LATL) followed by later and more general functions in the ventromedial prefrontal cortex (vmPFC) and the angular gyrus (AG). The same regions appear to operate during production but in reverse order. In sum, contrary to hypotheses that treat natural language composition as monolithic and localized to a single region, the picture emerging from our work suggests that composition is achieved by a network of regions which vary in their computational specificity and domain generality.

2013-01-08

Complexity is not Noise: Using Redundancy and Complementarity in the Input to Simplify Learning

Jon A. Willits Indiana University

+ more

Language acquisition has often been cast as an enormously difficult problem, requiring innate knowledge or very strong constraints for guiding learning. I will argue that this alleged difficulty arises from a mischaracterization of the learning problem, whereby it is assumed (implicitly, at least) that language learners are solving a set of independent problems (e.g. word segmentation, word-referent mappings, syntactic structure). In fact, these problems are not independent, and children are learning them all at the same time. But rather than this making language acquisition even more difficult, these interactions immensely simplify the learning problem, by allowing children to take what they have learned in one domain and use it to immediately constrain learning in others. In this talk, I will focus on interactions between the lexicon and syntactic structure, and discuss corpus analyses, computational models, and behavioral experiments with infants and adults. These studies will demonstrate how redundancy and complementarity in the input help children and adults solve a number of learning and comprehension problems, such as learning syntactic nonadjacent dependencies via semantic bootstrapping, and dealing with interactions between semantic and syntactic structure in language processing.

2012-12-04

Learnability of complex phonological interactions: an artificial language learning experiment

Mike Brooks, Bozena Pajak, and Eric Bakovic

+ more

What inferences do learners make based on partial language data? We investigated whether exposure to independent phonological processes in a novel language would lead learners to infer their interaction in the absence of any direct evidence in the data. Participants learned to form compounds in an artificial language exhibiting independently-triggered phonological processes, but the potential interaction between them was withheld from training. Unlike control participants trained on a near-identical language without this potential, test participants rated critical items exhibiting the interaction as significantly more well-formed than control items, suggesting that they were able to generalize beyond the observed language properties.

2012-11-27

Mapping linguistic input onto real-world knowledge in online language comprehension

Ross Metusalem

+ more

Comprehending language involves mapping linguistic input onto knowledge in long term memory. This talk will discuss two studies, one complete and one at its outset, investigating this mapping as it occurs during incremental comprehension. Specifically, the studies examine the activation of unstated knowledge regarding described real-world events. The talk will begin by briefly discussing an ERP study finding that the N400 elicited by a locally anomalous word (e.g., They built a jacket in the front yard) is reduced when that word is generally associated with the described event (Metusalem, Kutas, Urbach, Hare, McRae, & Elman, 2012). This is taken to indicate that online comprehension involves activation of unstated knowledge beyond that which would constitute a coherent continuation of the linguistic input. The talk will then turn to an upcoming study that will utilize both Visual World eye-tracking and ERP experiments to probe knowledge activation as a discourse unfolds through time, with the aim of addressing specific issues regarding how linguistic input dynamically modulates knowledge activation during online comprehension.

2012-11-20

Much ado about not(hing)

Simone Gieselman

+ more

Negative sentences such as Socrates didn't like Plato are thought to come with a large processing cost in comparison to their corresponding positive counterparts, such as Socrates liked Plato. This is reflected in longer reading and reaction times, higher error rates, larger brain responses and greater cortical activation for negative versus positive sentences. From the perspective of everyday language use, this is surprising, because we use negation frequently, and mostly with apparent ease. Many studies have attempted to shed light on the reason for the processing cost of negation but so far, the "negation puzzle" hasn't been solved.

In this talk, I present a series of reading-time studies showing that if we control the context of positive and negative sentences in a clear and precise way, we can manipulate whether the "negation effect" appears or not. On the basis of these results, I argue that negative sentences generally aren't harder to process than positive sentences. Depending on the context of an utterance, negative sentences may be less informative than positive sentences (the opposite may also be true) and thus require additional inferential processing on the part of the comprehender to understand what the intended world is like. I argue that these additional inferential processes have previously been conflated with an inherent processing cost of negation.

2012-11-13

Storage and computation in syntax: Evidence from sentence production priming studies

Melissa Troyer

+ more

In morphology, researchers have provided compelling evidence for the storage of compositional structures that could otherwise be computed by rule. In syntax, evidence of storage of fully compositional structures has been less forthcoming. We approach this question using syntactic priming, a method exploiting the tendency of individuals to repeat recently produced syntactic structures. We investigate relative clauses (RCs), which are syntactically complex but are nevertheless frequent in natural language. Across three experiments, we observe that priming of object-extracted RCs is sensitive to a) the type of noun phrase in the embedded subject position (a full NP vs. a pronoun), and b) the type of relative pronoun (who vs. that). This suggests that the representations of some types of RCs involve storage of large units that include both syntactic and lexical information. We interpret these results as supporting models of syntax that allow for complex mixtures of stored items and computation.

2012-10-30

All in my mind: language production, speech errors, and aging

Trevor Harley University of Dundee

+ more

What happens to language skills as we age? In particular, what happens to the skills that enable us to manipulate our own language processes? I present data from several studies on changes in phonological awareness in normal and pathological aging (mainly concerning individuals with Parkinson's disease). I relate the results to models of lexical access in speech production and of the executive control of language. I also discuss the nature of a general phonological deficit and how aging can mimic its effects. Primarily though I ask: what is wrong with my language production?

2012-10-23

Impossible to Ignore: Phonological Inconsistency Slows Vocabulary Learning

Sarah Creel

+ more

Though recent work examines how language learners deal with morphosyntactic input inconsistency, few studies explore learning under phonological inconsistency. The predominant picture of phonological acquisition is that young learners encode native-language speech sound distributions, and these distributions--phonemes--then guide lexical acquisition. Yet most children’s phonological experiences, even within a language, contain variability due to regional dialect variation, L2 speakers, and casual speech, potentially generating seemingly-different phonological realizations of the same word. Do learners merge variant word forms, or store each variant separately? To distinguish between these possibilities, children (ages 3-5) and adults learned words with or without phonological inconsistency. Both children and adults showed increased difficulty when learning phonologically inconsistent words, suggesting they do not merge speech-sound category variability. Data are more consistent with learning separate forms, one per accent, though this appears easier than learning two completely-different words. Ongoing work explores real-world accent variation.

2012-10-16

Why do your lips move when you think?

Gary Oppenheim

+ more

When you imagine your own speech, do you think in terms of the motor movements that you would use to express your speech aloud (e.g. Watson's 1913 proposal that "thought processes are really motor habits in the larynx"), or might this imagery represent more abstract phonemes or words? Inner speech is central to human experience, often stands in for overt speech in laboratory experiments, and has been implicated in many psychological disorders, but it is not very well understood. In one line of work (Oppenheim & Dell, 2008; 2010; Oppenheim, 2012; in press; Dell & Oppenheim, submitted), I have examined phonological encoding in inner speech, trying to identify the form of the little voice in your head. Here I've developed a protocol to examine levels of representation in inner speech by comparing distributions of self-reported errors in inner speech to those in overt speech, and used both process (neural network) and analytical (multinomial processing tree) models to relate the differences in error patterns to differences in the underlying processes. The work indicates that inner speech represents a relatively abstract phoneme level of speech planning (Oppenheim & Dell, 2008), but is flexible enough to incorporate further articulatory information when that becomes available (Oppenheim & Dell, 2010). For example, silently mouthing a tongue-twister leads one to 'hear' different errors in their inner speech. Aside from addressing the initial questions about inner speech, this work has constrained theories of self-monitoring in overt speech production (Oppenheim & Dell, 2010; Oppenheim, in press) and provided crucial evidence for the role of abstract segmental representations (Dell & Oppenheim, submitted).

This talk will primarily focus on the empirical work, but I can address additional issues as time and interest allow. For instance, recent challenges to our 2008 claims (e.g. from Corley, Brocklehurst, & Moat, 2011), though overstated, have inspired a more general account of the relationship between error rates and 'good' error effects that is backed by both computational modeling and empirical data (Oppenheim, 2012; Dell & Oppenheim, submitted): because speech errors are over-determined, error effects tend to be stronger (as odds ratios) when production is more accurate, but the resultantly rare errors may provide less statistical power to detect error effects.

2012-10-09

The Grammar of Visual Narratives: Structure, Meaning, and Constituency in Comics

Neil Cohn

+ more

Comics are a ubiquitous form of visual narrative in contemporary society. I will argue that, just as syntax allows us to differentiate coherent sentences from scrambled strings of words, the comprehension of sequential images in comics also uses a grammatical system to distinguish coherent narrative sequences from random strings of images. First, I will present a theoretical model of the narrative grammar underlying comics—a hierarchic system of constituent structure that constrains the sequences of images. I will then provide an overview of recent research that supports the psychological validity of this grammar, using methods from psycholinguistics and cognitive neuroscience. In particular, I will emphasize that the neurophysiological responses that appear to violations of syntax and semantics in sentences also appear to violations of narrative structure and semantics in the sequential images of comics. Finally, I consider what ramifications a narrative grammar of sequential images has on theories of verbal narrative and language in general.

2012-06-05

The impact of language and music experience on talker identification

Micah Bregman

+ more

Speech is typically studied for its role in transmitting meaning through words and syntax, but it also provides rich cues to talker identity. Acoustic correlates of talker identity are intermingled with speech sound information, making talker recognition a potentially difficult perceptual learning problem. We know little about how listeners accomplish talker recognition, though several previous studies suggest a role for language familiarity and phonological processing. In this talk, I will present the results of a recent study with Professor Sarah Creel in which we asked whether bilingual and monolingual listeners learned voices more rapidly as a function of language familiarity and age of acquisition. We observed an interaction with language background: Korean-English bilinguals learned to recognize Korean talkers more rapidly than they learned English talkers, while English-only participants learned English talkers faster than they learned Korean talkers. Further, bilinguals' learning speed for talkers in their second language (English) correlated with how early they began learning English. Individuals with extensive musical experience learned to recognize voices in their non-dominant language faster than those with less musical experience. Taken together, these results suggest that individual differences in language experience and differences in auditory experience (or ability) affect talker encoding.

2012-05-22

Using hands and eyes to investigate conceptual representations: Effects of spatial grouping and event sequences on language production

Elsi Kaiser Department of Linguistics, University of Southern California

+ more

In this talk, I present some of our recent work investigating how the human mind represents (i) relations between events in different domains (using priming to probe effects of motor actions on discourse-level representations) and (ii) relations between objects in different domains (effects of grouping in the visual domain and in language, on the prosodic level). Segmenting stimuli into events and understanding the relations between those events is crucial for understanding the world. For example, on the linguistic level, successful language use requires the ability to recognize semantic coherence relations between events (e.g. causality, similarity). However, relatively little is known about the mental representation of discourse structure. I will present experiments investigating whether speakers’ choices about event-structure and coherence relations between clauses are influenced by semantic relations represented by preceding motor actions (especially causality), and the event-structure of such motor-action sequences. These studies used a priming paradigm, where participants repeated a motor action modeled by the experimenter (e.g. roll a ball towards mini bowling pins to knock them over), and then completed an unrelated sentence-continuation task. In addition, I will investigate the question of cross-domain representations from another angle: I will present a study that investigates the relation between abstract relations in the domain of prosody (prosodic grouping and prosodic boundaries) and relations in the visual domain (grouping objects). As a whole, our findings provide new information about the domain-specificity vs. domain-generality of different kinds of representations. In the domain of events, our findings point to the existence of structured representations which encode fine-grained details as well as information about broader connections between classes of coherence relations, and suggest that motor actions can activate richly-encoded representations that overlap with discourse-level aspects of language. In the visual domain, our findings suggest that linguistic and visual representations interface at an abstract level, reflecting cognitive structuring, not the detailed physical dimensions of either speech or visual information.

2012-05-15

Evolving The Direct Path In Praxis As A Bridge To Duality Of Patterning In Language

Michael Arbib USC Neuroscience

+ more

We advance the Mirror System Hypothesis (Arbib, 2012: How the Brain Got Language: The Mirror System Hypothesis. Oxford University Press) by offering a new neurologically grounded theory of duality of patterning in praxis and showing how it serves complex imitation and provides an evolutionary basis for duality of patterning in language.

2012-05-08

Sequential vs. hierarchical models of human incremental sentence processing

Victoria Fossum

+ more

Experimental evidence demonstrates that syntactic structure predicts observed reading times during human incremental sentence processing, above and beyond what can be accounted for by word-level factors alone. Despite this evidence, open questions remain: which type of syntactic structure best explains observed reading times, hierarchical or sequential, and lexicalized or unlexicalized? One previous study found that lexicalizing syntactic models does not improve prediction accuracy. Another more recent study found that sequential models predict reading times better than hierarchical models, and concluded that the human parser is insensitive to hierarchical syntactic structure. We investigate these claims, and find a picture more complicated than the one presented by previous studies. Our findings show that lexicalization does improve reading time prediction accuracy after all, and that the claim that the human parser is insensitive to hierarchical syntactic structure is premature.
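As a rough illustration of the general method (not this talk's specific models or data), testing whether a syntactic model explains reading times beyond word-level factors usually amounts to asking whether a model-derived per-word predictor, such as surprisal, improves a regression fit over baseline predictors like frequency and length. A minimal Python sketch, with a hypothetical data file and column names:

    # Illustrative sketch only: regress reading times on baseline word-level
    # predictors, then ask whether adding model-derived surprisal improves fit.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("reading_times.csv")  # hypothetical columns: rt, log_freq, word_len, surprisal

    baseline = smf.ols("rt ~ log_freq + word_len", data=df).fit()
    full = smf.ols("rt ~ log_freq + word_len + surprisal", data=df).fit()

    # If the syntactic model captures something real about incremental
    # processing, the full model should fit reliably better than the baseline.
    print("log-likelihood improvement:", full.llf - baseline.llf)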

2012-05-01

Workshop: Writing papers for publication

Victor Ferreira

+ more

Yes, it's true: you're about to enter the ranks of the elite paper submitter. In this workshop, Vic (with a little help from his friends) will be sharing thoughts about strategies for getting your research published. Don't miss it!

2012-04-24

Not Lost in Translation: Learning about words in a sea of sound

Suzanne Curtin Department of Psychology, University of Calgary

+ more

Learning about words is one of the key building blocks of language acquisition. To learn a new word, infants begin by identifying the relevant sound pattern in the speech stream. Next, they encode a sound representation of the word, and then establish a mapping between the word and a referent in the environment. Despite the apparent complexity of this task, infants move from a productive vocabulary of about 6 words at 12 months to a vocabulary of over 300 words by 24 months. In this talk I will discuss some of the ways in which young infants use the phonological information in the speech signal to map words to meaning. Specifically, I will present research exploring how knowledge of the sound system established over the first year of life influences infants’ mapping of words to objects and events.

2012-04-17

How the self-domestication hypothesis can help to solve language evolution puzzles

Robert Kluender

+ more

Most proposals for the evolution of language center around the so-called discontinuity paradox: while human language has to have come from somewhere evolutionarily (i.e. ought to be reconstructably continuous with the communicative behavior of other species), it nonetheless appears to exhibit sharp qualitative differences (i.e. discontinuities) from other known systems of animal communication. Typically, those uncomfortable with the notion of human language as an evolutionary accident or "spandrel" have been forced to adopt a gradualist, continuous view of language evolution, a rather difficult position to defend given the mass extinction of -- and consequent absence/paucity of relevant evidence from -- all other known hominin species.

Recently, much attention has been paid to the surprising yet consistent morphological and behavioral discontinuities that emerge in various unrelated species under human intervention via domestication, and by hypothesis in certain wild species under proposed processes of "self-domestication". In this talk I review these separate proposals and juxtapose them in a way that reveals a number of appealing solutions to long-standing, thorny conceptual problems in the evolution of language. Aside from obvious implications for human socialization and enculturation, I argue that self-domestication in the hominin lineage could help to account for not only the otherwise mysterious descent of the larynx, but also precisely for those puzzling facts that modern, discontinuous views of language arose to address in the first place: namely, the "overcomplexity" of human language (Saussure) and the ease with which it is acquired at remarkably early stages of human development, when cognitive ability is otherwise severely limited (so-called critical period effects).

2012-04-10

ERPs for Gender Processing in French adults and children: task and age effects

Phaedra Royle École d'orthophonie et d'audiologie, Université de Montréal

+ more

This talk presents the first study of auditory gender processing in French using ERPs. In order to study the development of gender agreement processing in French children, we developed an ERP study using an auditory-visual sentence-picture matching paradigm for French noun-phrase (DP) internal agreement. This is an aspect of French that can be difficult to master, due to the idiosyncrasy of gender marking, which has also proven difficult for children with language impairment. We used the ERP paradigm in order to tap into ongoing language processing while obviating the use of grammaticality judgment, which can be difficult to use with young children. A first study established the paradigm with adult data, while controlling for task effects. A second study piloted the paradigm with children.

2012-04-03

Cross-Modal Mechanisms in Situated Language Processing

Moreno Coco School of Informatics, University of Edinburgh

+ more

Most everyday tasks require different cognitive processes to exchange, share, and synchronize multi-modal information. In my eye-tracking research, I focus on the mechanisms underlying the synchronous processing of visual and linguistic information during language production tasks, such as object naming or image description in photo-realistic scenes.

In this talk, I first discuss the interplay between low level (e.g., visual saliency) and high level information (e.g., contextual congruency) during object naming. Then, I move to the more complex linguistic task of scene description. In contrast to the previous literature, my results show the co-existence of three components of visual guidance (perceptual, conceptual, and structural) which interact with sentence processing. Based on this finding, I outline a novel approach to quantifying the cross-modal similarity of visual and linguistic processing. In particular, I demonstrate that the similarity between visual scan patterns correlates with the similarity between sentences, and that this correlation can be exploited to predict sentence productions based on associated scan patterns.

2012-03-13

From Shared Attention to Shared Language: Results From a Longitudinal Investigation of Early Communication

Gedeon Deák

+ more

The literature on child language offers a bewildering array of data on the emergence of early language. There is evidence that prelinguistic social development, infants' own information-processing capacities, and richness of the language environment jointly explain the wide range of language skills seen in 1- and 2-year-old toddlers. There are, however, few studies that investigate all three factors (prelinguistic social skills, cognitive capacities, and language input) in tandem. I will describe preliminary findings from a study that does just that: a longitudinal sample of infants followed from 3 to 22 months. I will focus on individual differences in infants' attention-sharing skills in controlled tasks (mostly from 9 to 12 months), on maternal naturalistic speech variability, including amount of talk, diversity of vocabulary, use of "mental verbs," and discourse markers of 2nd-person address (i.e., infant's name and "you"). I will describe relations among those variables, and indicate which ones uniquely predict language skills at 12 and 18 months.

2012-03-06

Can native-language perceptual bias facilitate learning words in a new language?

Bożena Pająk
(work in collaboration with Sarah Creel & Roger Levy)

+ more

Acquiring a language relies on distinguishing the sounds and learning mappings between meaning and phonetic forms. Yet, as shown in previous research on child language acquisition, the ability to discriminate between similar sounds does not guarantee success at learning words contrasted by those sounds. We investigated whether adults, in contrast to young infants, are able to attend to phonetic detail when learning similar words in a new language. We tested speakers of Korean and Mandarin to see whether they could use their native-language-specific perceptual biases in a word-learning task. Results revealed that participants were not able to fully capitalize on their perceptual abilities: only better learners -- as independently assessed by baseline trials -- showed enhanced learning involving contrasts along phonetic dimensions used in their native languages. This suggests that attention to phonetic detail when learning words might only be possible for adults with better overall perceptual abilities, better learning skills, or higher motivation.

2012-02-28

Cumulative semantic interference persists even in highly constraining sentences

Dan Kleinman

+ more

When speakers engage in conversation, they often talk about multiple members of the same semantic category. Given this, it seems inefficient that subjects name pictures (e.g., cow) more slowly when they have previously named other (and more) members of the same semantic category (horse, pig; Howard et al., 2006). Of course, in normal speech, words are typically produced in rich semantic contexts. In my talk, I will present the results of two experiments that investigate whether this cumulative semantic interference effect (CSIE) persists even when pictures are presented in such a context; i.e., after high-cloze sentences.

In each of two experiments, 80 subjects named 60 critical pictures, comprising 12 semantic categories of five pictures each, in two blocks. In both blocks of Experiment 1 and the first block of Experiment 2, half of the pictures in each block were presented in isolation; the other half were preceded by high-cloze sentences presented via RSVP with the last word omitted (e.g., "On the class field trip, the students got to milk a ___"). In the second block of Experiment 2, every picture was presented in isolation.

Results from both experiments showed that although pictures were named nearly 200 ms faster in the sentence condition relative to the bare condition, CSIEs of equivalent size were observed within both conditions. Furthermore, Experiment 1 showed that this interference fully transferred between conditions: Naming cow slowed the subsequent naming of horse equally regardless of whether cow or horse were named in isolation or after a sentence. However, Experiment 2 showed that despite equivalent interference effects, pictures that were named after sentences in the first block (compared with pictures that were named in the bare condition in the first block) exhibited less repetition priming in the second block.

Three conclusions can be drawn from these results. First, they demonstrate that cumulative semantic interference persists -- undiminished in size -- even when pictures are named in richer semantic contexts, suggesting that CSI might affect more naturalistic speech. Second, they run counter to the predictions of Howard et al. (2006), whose model of CSI involves competitive lexical selection and incorrectly predicts that trials with faster naming latencies will show less interference; but comport with the error-based learning account of CSI advanced by Oppenheim et al. (2010). Third, the results potentially shed light on the nature of cloze, since Oppenheim et al. (2010) can explain the pattern of decreased repetition priming and unchanged CSIE in the sentence condition if the sentences used in Experiments 1 and 2 increased target activation while leaving competitor activation unchanged.

2012-02-21

The Object of Whose Hands? Empathy and Movement in the Work of Literary Studies

Stephanie Jed Department of Literature

+ more

I investigate terms/concepts such as grasping, attention, representation, space, event, and intersubjectivity as they are embodied in literary studies and cognitive science research. My intent is to theorize, in concrete ways, how our hands form part of the interpretive field, and to explore the viability of cross-disciplinary research between literature/history and cognitive science.

2012-02-14

Niche Construction and Language Evolution

Hajime Yamauchi University of California, Berkeley

+ more

Like other new scientific enterprises, studies within evolutionary linguistics vary widely. While some argue that language owes its phylogenetic explanation to simple brain evolution (i.e., biological evolution), others promote the view that language is a complex meme replicated through acquisition, and hence has evolved to be a better replicator for the brain (cultural evolution). These divisions reflect the notorious polarization of the nature-nurture problem. Unlike traditional linguistics, however, the intersection of the two camps, known as brain-language coevolution, is where the most exciting findings are expected. Unfortunately, despite its promising perspective, studies in this domain have been conspicuously lacking.

In this presentation, I will discuss language acquisition as a key aspect of this coevolutionary process: it is a "differential gear" connecting the two wheels revolving at different timescales (i.e., biological and cultural evolution). With a computer simulation, I will demonstrate that language entails a modification not only of the selective environment, but also of the learning environment; the learning environment in one generation is dynamically created by the previous generations' linguistic activities (and itself forms a selective environment). If such modifications of the learning environment affect the learnability of a given language, and hence the cost of learning, they will induce an evolutionary process on language acquisition.

2012-02-07

How our hands help us think

Susan Goldin-Meadow

+ more

When people talk, they gesture. We now know that these gestures are associated with learning. They can index moments of cognitive instability and reflect thoughts not yet found in speech. What I hope to do in this talk is raise the possibility that gesture might do more than just reflect learning--it might be involved in the learning process itself. I consider two non-mutually exclusive possibilities: the gestures that we see others produce might be able to change our thoughts; and the gestures that we ourselves produce might be able to change our thoughts. Finally, I explore the mechanisms responsible for gesture's effect on learning--how gesture works to change our minds.

2012-01-31

Language, Sensori-Motor Interfaces, and Time: Temporal Integration Windows in the Perception of Signed and Spoken Languages

So-One Hwang

+ more

Linguistic structures are processed in time, whether listening to acoustic speech or viewing the visual input of sign language. In this talk, I will discuss the perceiver's sensitivity to the rate at which linguistic form and meaning unfold for integrating the sensory input in time chunks. The duration or size of time windows for integrating the input is tested by measuring the intelligibility of locally-reversed sentences in American Sign Language and making comparisons with findings from speech. In a series of three perceptual experiments, the results demonstrate 1) the impact of modality (auditory versus visual processing) on the duration of temporal integration windows, where visually based ASL is dramatically more resistant to this temporal distortion than spoken English and involves longer time-windows for integration, 2) modality-independent properties of temporal integration where duration is directly linked with the rate of linguistic information in both signed and spoken languages, and 3) the impact of age of language acquisition on temporal processing. These findings have implications for the neurocognitive underpinnings of integration in perception, for rates in production, and for the role of input in early development for these aspects of language processing.

2012-01-24

What You Expect When You're Expecting: Listener Modeling of Speakers in Language Comprehension

Rachel Ostrand

Recruiting auditory space to reason about time

Esther Walker

Testing phonological organization in bilinguals: An event-related brain potential study

Carson Dance

2012-01-17

Neural Correlates of Auditory Word Processing in Infants and Adults

Katie Travis

+ more

Although infants and adults both learn and experience words frequently in the auditory modality, much more is known about the neural dynamics underlying visual word processing. Even more limited is knowledge of the brain areas supporting developing language abilities in infants. In this talk, I will describe findings from three related studies that help to advance current understanding of neurophysiological processing stages and neural structures involved in auditory word processing in both the developing and mature brain. Briefly, the first study I will present reveals new evidence from adults for an early neural response that is spatially and temporally distinct from later, well-established neural activity thought to index the encoding of lexico-semantic information (N400). The second study I will describe finds evidence to suggest that infants and adults share similar neurophysiological processes and neuroanatomical substrates for spoken word comprehension. Finally, I will discuss results from a third study in which we find evidence for neuroanatomical structural changes within cortical areas thought to be important for word understanding in 12-19 month old infants.

2011-11-29

Cultural emergence of combinatorial structure through iterated learning of whistled languages

Tessa Verhoef University of Amsterdam

+ more

In human speech, a finite set of basic sounds is combined into a (potentially) unlimited set of well-formed morphemes. Hockett (1960) termed this phenomenon 'duality of patterning' and included it as one of the basic design features of human language. Of the 13 basic design features Hockett proposed, duality of patterning is the least studied and it is still unclear how it evolved in language. Hockett suggested that a growth in meaning space drove the emergence of combinatorial structure: If there is a limit on how accurately signals can be produced and perceived, there is also a limit to the number of distinct signals that can be discriminated. When a larger number of meanings need to be expressed, structured recombination of elements is needed to maintain clear communication. However, it has been demonstrated that a fully functional and expressive new sign language can have combinatorial structure that is still only emerging (Sandler et al., 2011). This case calls into question whether the emergence of combinatorial structure is necessarily driven by a growing meaning space alone. Furthermore, experimental work on the emergence of combinatorial structure in written symbols (del Giudice et al., 2010), as well as work I will present in this talk, shows that this structure can emerge through cultural transmission, even in the case of a small vocabulary. It seems therefore to be an adaptation to human cognitive biases rather than a response to a growth in vocabulary size. In these experiments we use the method of experimental iterated learning (Kirby et al., 2008), which allows cultural transmission to be investigated in the laboratory. This method simulates iterated learning and reproduction, in which the language a participant is trained on is the recalled output that the previous participant produced. The experiment I will present investigates the emergence of combinatorial structure in an artificial whistled language. Participants learn and recall a system of sounds that are produced with a slide whistle, an instrument that is both intuitive and non-linguistic so that interference from existing experience with speech is blocked. I show from a series of experiments that transmission from participant to participant causes the system to change and become cumulatively more learnable and more structured. Interestingly, the basic elements that are recombined consist of articulatory movements rather than acoustic features.
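To make the transmission-chain method concrete, it can be sketched in a few lines of code. The sketch below is purely illustrative and rests on my own simplifying assumptions (strings instead of whistles, and a crude 'reuse' bias standing in for human learning biases); it is not the experiment described above.

    # Toy transmission chain in the spirit of experimental iterated learning
    # (Kirby et al., 2008): each generation is trained on the previous
    # generation's recalled output.
    import random
    from collections import Counter

    ALPHABET = "abcdefgh"           # possible basic elements
    VOCAB_SIZE, SIGNAL_LEN = 12, 4  # number of signals and their length

    def recall(vocabulary, reuse_bias=0.3):
        """Reproduce each signal, sometimes substituting an element that is
        already frequent elsewhere in the vocabulary (imperfect, biased recall)."""
        counts = Counter("".join(vocabulary))
        frequent = [c for c, _ in counts.most_common(3)]
        out = []
        for signal in vocabulary:
            chars = [random.choice(frequent) if random.random() < reuse_bias else c
                     for c in signal]
            out.append("".join(chars))
        return out

    def inventory_size(vocabulary):
        return len(set("".join(vocabulary)))

    random.seed(0)
    language = ["".join(random.choices(ALPHABET, k=SIGNAL_LEN))
                for _ in range(VOCAB_SIZE)]
    print("distinct elements, generation 0:", inventory_size(language))
    for _ in range(10):              # ten "participants" in the chain
        language = recall(language)  # each learns from the previous output
    print("distinct elements, generation 10:", inventory_size(language))

Under this toy bias the inventory of distinct elements shrinks across generations, i.e., the same few elements are increasingly recombined; the real experiments measure learnability and structure far more carefully.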

References

del Giudice, A., Kirby, S., & Padden, C. (2010). Recreating duality of patterning in the laboratory: a new experimental paradigm for studying emergence of sublexical structure. In A. D. M. Smith, M. Schouwstra, B. de Boer, & K. Smith (Eds.), Evolang8 (pp. 399-400). World Scientific Press.

Hockett, C. (1960). The origin of speech. Scientific American, 203, 88-96.

Kirby, S., Cornish, H., & Smith, K. (2008). Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language. Proceedings of the National Academy of Sciences, 105(31), 10681-10686.

Sandler, W., Aronoff, M., Meir, I., & Padden, C. (2011). The gradual emergence of phonological form in a new language. Natural Language & Linguistic Theory, 29(2), 503-543.

2011-11-22

The relationship between referential and syntactic dependency processing

Christopher Barkley

+ more

Natural language is full of long-distance relationships in which two non-local elements depend on each other for full interpretation. In the event-related brain potential (ERP) literature, so-called filler-gap dependencies ("Which movie would you like to see __?") have received the bulk of the attention, and a relatively consistent picture has emerged in terms of the ERP responses associated with the processing of these types of syntactic relationships. Here we investigate another type of long-distance dependency, namely the co-referential relationship between a pronoun and its antecedent ("John thought that he should probably see the movie too").

Previous studies of referential processing have relied on violating various morpho-syntactic features of the pronoun-antecedent relationship such as agreement or binding (Osterhout & Mobley, 1995), or on manipulating ambiguity of reference (Van Berkum et al., 1999, 2003, 2007), but none have investigated the brain’s response to simple and unambiguous relationships between pronouns and their antecedents as we do here. We hypothesized that, while syntactic and referential dependencies have been analyzed very differently and kept maximally separate in the theoretical linguistics literature, they pose the same basic processing challenges for the brain, and therefore that similar brain responses should be observed in response to the second element in each dependency type. Our results revealed an interesting pattern of similarities and differences across dependency types, and will be discussed in terms of the relationship between syntactic and referential dependency formation and with regard to the functional identity of the left anterior negativity (LAN). They will also be placed in the context of the extant ERP literature on referential processing, and discussed in terms of the potentially fruitful bi-directional relationship between processing data and linguistic theory construction.

2011-11-15

Acquiring a first language in adolescence: Behavioral and neuroimaging studies in American Sign Language

Naja Ferjan Ramirez

+ more

What is the process of language acquisition like when it begins for the first time in adolescence? Is the neural representation of language different when acquisition first begins at an older age? These questions are difficult to answer because language acquisition in virtually all hearing children begins at birth. However, among the deaf population are individuals who have been cut off from nearly all language until adolescence; they cannot hear spoken language and, due to anomalies in their upbringing, they have not been exposed to any kind of sign language until adolescence. I will first discuss the initial language development of three deaf adolescents who, due to anomalies in upbringing, began to acquire American Sign Language (ASL) as their first language (L1) at age 14 years. Using the ASL-CDI and detailed analyses of spontaneous language production, we found that adolescent L1 learners exhibit highly consistent patterns of lexical acquisition, which are remarkably similar to those of child L1 learners. The results of these behavioral studies were then used to create the stimuli for a neuroimaging experiment of these case studies. Using anatomically constrained magnetoencephalography (aMEG), we first gathered pilot data by investigating the neural correlates of lexico-semantic processing in deaf native signers. Results show that ASL signs evoke a characteristic event-related response peaking at ~400 ms post-stimulus onset that localizes to a left-lateralized fronto-temporal network. These data agree with previous studies showing that, when acquired from birth, the localization patterns of ASL processing are similar to those of spoken language. Using the same experimental protocol we then neuroimaged two cases who had no childhood language and found that their brain responses to ASL signs look remarkably different from those of native signers, indicating that delays in language acquisition severely affect the neural patterns associated with lexico-semantic encoding. Our results suggest that language input in early childhood, spoken or signed, is critical for establishing the canonical left-hemisphere semantic network.

2011-11-08

Why would musical training benefit the neural encoding of speech? A new hypothesis

Aniruddh Patel

+ more

Mounting evidence suggests that musical training benefits the neural encoding of speech. This paper offers a hypothesis specifying why such benefits occur. The "OPERA" hypothesis proposes that such benefits are driven by adaptive plasticity in speech-processing networks, and that this plasticity occurs when five conditions are met. These are: (1) Overlap: there is anatomical overlap in the brain networks that process an acoustic feature used in both music and speech (e.g., waveform periodicity, amplitude envelope), (2) Precision: music places higher demands on these shared networks than does speech, in terms of the precision of processing, (3) Emotion: the musical activities that engage this network elicit strong positive emotion, (4) Repetition: the musical activities that engage this network are frequently repeated, and (5) Attention: the musical activities that engage this network are associated with focused attention. According to the OPERA hypothesis, when these conditions are met, neural plasticity drives the networks in question to function with higher precision than needed for ordinary speech communication. Yet since speech shares these networks with music, speech processing benefits. The OPERA hypothesis is used to account for the observed superior subcortical encoding of speech in musically trained individuals, and to suggest mechanisms by which musical training might improve linguistic reading abilities.

2011-11-01

Rhythm classes in speech perception

Amalia Arvaniti

+ more

A popular view of rhythm divides languages into three rhythm classes, stress-, syllable- and mora-timing. Although this division has not been supported by empirical evidence from speech production (e.g. Arvaniti, 2009; Arvaniti, to appear), it has been generally adopted in the fields of language acquisition and processing based on perception experiments that appear to support the notion of rhythm classes. However, many of the perceptual experiments are amenable to alternative interpretations. Here this possibility is explored by means of a series of perception experiments. In the first two experiments, listeners were asked to indirectly classify impoverished stimuli from English, German, Greek, Korean, Italian and Spanish by rating their similarity to non-speech trochees (the closest non-speech analog to stress-timing). No evidence was found that listeners rated the languages across rhythm class lines; results differed depending on the type of manipulation used to disguise language identity (in experiment 1, low-pass filtering; in experiment 2, flat sasasa in which consonantal intervals are turned into [s], vocalic ones into [a] and F0 is flattened). In a second series of five AAX experiments English was compared to Polish, Spanish, Danish, Korean and Greek in a 2*2 design: the (sasasa) stimuli either retained the tempo (speaking rate in syllables per second) of the original utterances or had all the same tempo (average of the two languages in each experiment); F0 was either that of the original utterances or flattened. Discrimination was based largely on tempo, not rhythm class, while the role of F0 depended on tempo: when tempo differences were large, F0 hindered discrimination but when they were small it enhanced discrimination for the pairs of languages that differ substantially in F0 patterns (especially English vs. Korean). The results overall do not support the idea that rhythm classes have a basis in perception. They further show that the popular sasasa signal manipulation is not ecologically valid: results differed depending on whether additional prosodic information provided by F0 was present or not, suggesting that the timing information encoded in sasasa is not processed independently of the other components of prosody. Finally, the results of the second series of experiments strongly suggest that results interpreted as evidence for rhythm classes are most likely due to a confound between tempo and rhythm class.

2011-10-25

Rational imitation and categorization in a-adjective production

Jeremy Boyd

+ more

How do language learners acquire idiosyncratic constraints on the use of grammatical patterns? For example, how might one determine that members of the class of English a-adjectives cannot be used prenominally (e.g., ??The asleep/afloat/alive duck…, cf. The duck that's asleep/afloat/alive…)? In this talk I present evidence indicating (1) that learners infer constraints on the use of a-adjectives by evaluating distributional patterns in their input, (2) that the constraint against prenominal a-adjective usage is abstract and generalizes across members of the a-adjective class, and (3) that learners shrewdly evaluate the quality of their input, and in fact disregard uninformative input exemplars when deciding whether a grammatical constraint should be inferred. Moreover, the existence of similar types of reasoning in non-linguistic species suggests the presence of phylogenetically conserved mechanisms that, while not specific to language, can be used to arrive at conclusions about what forms are and are not preferred in grammar.

2011-10-18

The Gradient Production of Spanish-English Code-Switching

Page Piccinini

+ more

It is generally assumed that in code-switching (CS) switches between two languages are categorical; however, recent research suggests that the phonologies involved in CS are merged and bilinguals must actively suppress one language when encoding in the other. Thus, it was hypothesized that CS does not take place abruptly but that cues before the point of language change are also present. This hypothesis is tested with a corpus of Spanish-English CS examining word-initial voiceless stop VOT and the vowel in the discourse marker "like." Both English and Spanish VOTs at CS boundaries were shorter, or more "Spanish-like," than in comparable monolingual utterances. The vowel of "like" in English utterances was more monophthongal and had a lower final F2 as compared to "like" in Spanish utterances. At CS boundaries, "like" began similarly to the language preceding the token and ended similarly to the language following it. For example, in an "English-like-Spanish" utterance, initial formant measurements were more English-like but final measurements more Spanish-like. These results suggest code-switching boundaries are not categorical, but rather an area where the phonologies of both languages affect productions.

2011-10-11

Do you see what I mean? Cognitive resources in speech-gesture integration

Seana Coulson

+ more

Often when people talk, they move their bodies, using their hands to indicate information about the shape, size, and spatial configuration of the objects and actions they're talking about. In this talk, I'll discuss a series of experiments in my lab that examined how gestural information affects discourse comprehension. We find that individuals differ greatly in their sensitivity to co-speech gestures, and suggest visuo-spatial working memory (WM) capacity as a major source of this variation. Sensitivity to speech-gesture congruity correlates positively with visuo-spatial WM capacity, and is greatest in individuals with high scores on tests of visuo-spatial WM, but low scores on tests of verbal ability. These data suggest an important role for visuo-spatial WM in speech-gesture integration as listeners use the information in gestures to help construct more visually specific situation models, i.e. cognitive models of the topic of discourse.

2011-10-04

Incremental lexical learning in speech production: a computational model and empirical evaluation

Gary Oppenheim

+ more

Naming a picture of a dog primes the subsequent naming of a picture of a dog (repetition priming) and interferes with the subsequent naming of a picture of a cat (semantic interference). Behavioral studies suggest that these effects derive from persistent changes in the way that words are activated and selected for production, and some have claimed that the findings require particular mechanisms for lexical selection. Here I will present and evaluate a simple model of lexical retrieval in speech production that applies error-driven learning to its lexical activation network. This model naturally produces repetition priming and semantic interference effects. It predicts the major findings from several published experiments, and model analyses suggest that its effects arise from competition during the learning process, requiring few constraints on the means of lexical selection. New empirical work confirms a core assumption of the learning model by demonstrating that semantic interference persists indefinitely -- remaining detectable at least one hundred times longer than reported in any previous publication -- with no indication of time-based decay.
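As a toy illustration of this kind of mechanism (my own simplified assumptions, not the model's actual architecture or parameters), an error-driven update applied to shared semantic-to-lexical connections yields both effects at once: naming dog strengthens dog's connections (repetition priming) while weakening cat's connections from the features the two words share (semantic interference).

    # Minimal error-driven (delta-rule) sketch over a semantic-feature -> word
    # network. Illustrative assumptions only.
    import numpy as np

    features = ["animal", "pet", "barks", "meows"]
    words = ["dog", "cat"]
    inputs = {"dog": np.array([1.0, 1.0, 1.0, 0.0]),
              "cat": np.array([1.0, 1.0, 0.0, 1.0])}

    rng = np.random.default_rng(0)
    W = rng.uniform(0.1, 0.3, size=(len(features), len(words)))  # feature -> word weights

    def activation(W, target):
        """Pre-update activation of the target word: a stand-in for retrieval ease."""
        return (inputs[target] @ W)[words.index(target)]

    def name_picture(W, target, lr=0.5):
        """Name a picture, then apply a delta-rule update: the target's
        connections strengthen, and competitors activated by the shared
        features weaken."""
        x = inputs[target]
        desired = np.array([1.0 if w == target else 0.0 for w in words])
        return W + lr * np.outer(x, desired - x @ W)

    dog_before, cat_before = activation(W, "dog"), activation(W, "cat")
    W = name_picture(W, "dog")
    print("repetition priming:", activation(W, "dog") > dog_before)     # True
    print("semantic interference:", activation(W, "cat") < cat_before)  # True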

2011-05-31

L2 phonological learning as a process of inductive inference

Bożena Pająk

+ more

Traditional approaches to second language (L2) phonological learning assume that learners map L2 inputs onto existing category inventories available in the native language (L1). We propose a very different model in which the acquisition of novel phonological category inventories proceeds through general categorization processes, in which knowledge of L1 and other languages provides inductive biases. This approach views linguistic knowledge as hierarchically organized such that the outcome of acquisition of a language includes not only knowledge of the specific language in question, but also beliefs about how languages in general are likely to be structured. In this talk we present results of two experiments that test the predictions of the model regarding how two sources of information—distributional information from a novel L2 and inferences derived from existing language knowledge—combine to drive learning of L2 sound categories.

2011-05-24

What's in a Rise? Effects of Language Experience on Interpretation of Lexical Tone

Carolyn Quam

+ more

Models of sound-categorization and word learning must address how second-language learners apply existing sound categories to learn a new language and how/whether established bilinguals differentially attend to acoustic dimensions when processing each language. Here we consider interpretation of pitch, which in English conveys sentence-level intonational categories (e.g., yes/no questions) but in Mandarin contrasts words. We addressed three questions: How accurately is tone information exploited in on-line word recognition? Does this differ for familiar versus newly learned words? Does this differ depending on language experience? In our eye-tracking paradigm, Mandarin-English bilinguals and English monolinguals learned and were tested on novel words. Bilinguals also completed a familiar-word recognition task and two language-dominance/proficiency measures. For bilinguals recognizing familiar Mandarin words, eye-movements revealed that words differing minimally in their segments were recognized faster than words differing in their tones (t(46)=5.2, p<.001). However, this segments>tones difference weakened as Mandarin proficiency increased (r=-0.41, p<.005). Lower-proficiency bilinguals might have exploited tone less because of less experience with the words, so we asked whether newly learned words would also show effects of Mandarin proficiency/knowledge. Clicking responses revealed that monolinguals (t(10)=4.23, p<.005) and bilinguals (t(47)=8.15, p<.001) were less accurate with different-tone than with different-vowel words, regardless of bilinguals’ Mandarin proficiency. Our experiments suggest more difficulty exploiting tone than segments unless word familiarity and Mandarin proficiency are high. This provides a more nuanced view than previous studies (Malins & Joanisse, 2009; Wong & Perrachione, 2007) of the impact of language background on tone interpretation in word-learning and retrieval.

2011-05-17

Retrieving words requires attention

Dan Kleinman

+ more

Even though speaking usually feels like an automatic process, it isn't - at least, not entirely, as we know from studies showing that talking on a cell phone impairs driving performance. Which stages of language production require attentional resources, and which are automatic? In my talk, I will focus on this question with respect to lemma selection, the stage at which the word to be produced is selected from a speaker's lexicon.
Prior research has investigated this topic using dual-task experiments. Dell'Acqua et al. (2007) presented subjects on each trial with a tone and then, after some delay, a picture with a visually superimposed word (the picture-word interference task). Subjects categorized the pitch of the tone and then named the picture while ignoring the word, which was either semantically related or unrelated to the picture name. They found semantic interference at delays of 350 and 1000 ms but not 100 ms. In keeping with the logic of dual-task experiments, they concluded that lemma selection could co-occur with attention-demanding tone processing, suggesting that it could be performed automatically. This finding is surprising for two reasons: First, it localizes lemma selection to a stage of processing that typically consists of low-level perceptual processing. Second, prior research has shown that attention is required to resolve competition in the Stroop effect, to which picture-word interference is often compared.

2011-05-10

Neither hear nor there: a 'metrical restoration' effect in music

Sarah Creel

+ more

What happens in your mind when you hear music? That is, what memories become activated--Where you were when you first heard the song? The time you played it in middle school band? Other similar pieces of music? Recent work in my lab suggests that, much as with linguistic material, listeners hearing music activate detailed memory representations of previous hearings. In this talk I will outline a series of music perception experiments that ask what information gets activated when you hear melodies of varying familiarity. Specifically, I manipulate each listener's musical experience--for instance, listener 1 might hear a melody in Context A, and listener 2 might hear the same melody in Context B--where each context consists of instruments that play at the same time as the melody. I then present both listeners with the melody out-of-context, and probe for effects of the experienced context.
In an initial study (Creel, in press, JEPHPP), I found that listeners retained melody-specific memory for meter. That is, depending on which context they had heard initially, they thought the "beats" fell in different places in a particular melody. An even more interesting question is how memory representations of specific musical experiences might influence processing of other music you hear, such as a Beatles song or Vivaldi concerto you haven’t heard before. Ongoing work is exploring the circumstances under which melody-specific memory influences the processing of new melodies. These results not only imply that listeners activate musical information in a style-specific manner, but also suggest a mechanism by which musical styles might be learned. This approach is somewhat at odds with explanations of music perception that focus on surface cues alone, in that it suggests a strong, specific role for memory. I will also discuss the current work's implications for processing of metrical information in language.

2011-05-03

Tomorrow, uphill: Topography-based construals of time in an indigenous group of Papua New Guinea

Rafael Nunez & Kensy Cooperrider

+ more

Do humans everywhere share the same basic abstract concepts? Time, an everyday yet fundamentally abstract domain, is conceptualized in terms of space throughout the world’s cultures. Specifically, linguists and psychologists have presented evidence of a widespread pattern in which deictic time—past, present, and future—is construed according to a linear front/back axis. To investigate the universality of this pattern, we studied the construal of deictic time among the Yupno, an indigenous group from the mountains of Papua New Guinea, whose language makes extensive use of allocentric topographic (uphill/downhill)—but not egocentric (front/back)—terms for describing spatial relations. The pointing direction of their spontaneous co-speech temporal gestures—analyzed via spherical statistics and topographic information—provides evidence of a strikingly different pattern in their time concepts. Results show that the Yupno construe deictic time spatially in terms of allocentric topography: the past is construed as downhill, the present as co-located with the speaker, and the future as uphill. The Yupno construal reflects particulars of the local terrain, and, in contrast to all previous reports, is not organized in terms of opposite directions along a “time-line”. The findings have implications for our understanding of fundamental human abstract concepts, including the extent to which they vary and how they are shaped by language, culture, and environment.

2011-04-26

Uncertainty about Previous Words and the Role of Re-reading in Sentence Comprehension

Emily Morgan

+ more

Models of sentence comprehension and of eye-movements in reading have generally focused on the incremental processing of sentences as new words become available, but have paid less attention to the possibility of rereading a previous word. There is recent evidence, however, that downstream information can cause a comprehender to question their belief about a previous word. In this case, a reasonable strategy might be to gather more visual input about the previous word in light of this new information. I will present work in progress on a series of eye-tracking experiments investigating uncertainty in mental representations of visual input and the role of re-reading in sentence comprehension.

2011-04-12

ERP Investigations of Causal Inference Processing

Tristan Davenport

+ more

In this talk I report the results of two experiments investigating the effects of causal inference on word processing. EEG was recorded as subjects listened to short stories containing causal coherence gaps, each one followed by a visual probe word selected to index causal inferential or lexical associative processing. In experiment 1, we compare the influences of these two types of context on word processing and find that facilitation effects due to causal inference begin earlier and last longer than those attributed to lexical association. In experiment 2, the first of several planned variations using these materials, the probe words were presented in visual hemifield to assess hemispheric asymmetries in using lexical and inferential context. Results tentatively suggest a right-hemisphere basis for causal inference effects. Taken together, these results are consistent with models of top-down language processing, with different contextual variables weighted by how well they predict the current word. The results of experiment 2 additionally suggest a neural dissociation between these two aspects of language processing.

2011-04-05

Language, Structure, & Thought

David Barner

+ more

I will describe three approaches to studying the relationship between linguistic structure & thought: object perception, counting, and mental math. Together these studies argue that although language provides important structure for guiding inference when learning words & concepts, we do not use it to create qualitatively novel representations. Words act as windows to thought, selecting from among pre-existing representations, or recycling them for new purposes.

2011-03-29

Neurocognitive Indices for Event Comprehension

Hiromu Sakai Hiroshima University

+ more

Recognition of event type plays a highly significant part in sentence comprehension. In head-final languages, the predicates that play important roles in determining event type are processed at relatively late stages in the course of constructing the semantic representation of sentences. This leads to interesting questions about when and how event comprehension is achieved in such languages. I conducted a series of behavioral and electro-physiological (event-related potential) experiments that address these issues. The results showed that aspectual mismatch of elements increased reading times even before the parser encounters the predicates, and that aspectual coercion elicited a left-frontal negativity associated with increased processing load. These findings suggest that event comprehension is carried out in an incremental fashion in the course of constructing the semantic representation of sentences, even in head-final languages.

2011-03-08

Tutorial on hierarchical/mixed-effects models for data analysis

Roger Levy et al

+ more

Hierarchical (also called "multi-level", or sometimes "mixed-effects") probabilistic models are becoming quite popular in all sorts of quantitative work on language, and with good reason: they can capture cross-cutting sources of variability at multiple levels of granularity, allowing researchers great flexibility in drawing generalizations from data. In this tutorial I give a brief introduction to the use of hierarchical models in linguistic data analysis. First I briefly review generalized linear models. I then go on to give a precise description of hierarchical generalized linear models, and cover both (approximate) maximum-likelihood and Bayesian methods for drawing inferences for such models. I continue with coverage of the crucial issue of how to interpret model parameter estimates and conduct hypothesis tests. Finally, I briefly discuss some ongoing work (joint with Hal Tily) on systematic comparisons of different ways of using these models for data analysis, how they compare with traditional ANOVA analyses, and (hopefully) progress towards reliable standards for the use of hierarchical models in linguistic data analysis that reaps their benefits while avoiding potential pitfalls.

The tutorial will mix conceptual and mathematical treatment with concrete examples using both simulated and real datasets. R code for much of the material covered in the tutorial will be made publicly available, as well.
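By way of a concrete, purely illustrative example of the kind of model discussed in the tutorial: in R such analyses are commonly run with lme4's lmer; a rough Python analogue using statsmodels, with a hypothetical data file and column names, might look like this.

    # Illustrative sketch: a linear mixed-effects model with a fixed effect of
    # condition and a by-subject random intercept. File and column names are
    # hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("rt_data.csv")           # columns: rt, condition, subject

    model = smf.mixedlm("rt ~ condition",     # fixed effect of condition
                        data=df,
                        groups=df["subject"], # grouping factor for random effects
                        re_formula="~1")      # random intercept per subject
    result = model.fit()
    print(result.summary())

Crossed random effects for subjects and items, routine in the R tooling, take extra work in statsmodels; the sketch shows only the by-subject random intercept.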

2011-03-01

Anticipation is making me look: Prediction, priming and computation in language understanding

Jim Magnuson University of Connecticut and Haskins Labs

+ more

Over the last several years, theories of human language processing have emerged that posit highly top-down architectures that actively and optimally forecast upcoming words. Simultaneously, there has been a resurgence of theories that assume modularity of initial processing and late integration of top-down information. I will describe two studies that address both trends. The first study uses eye tracking data to challenge optimality assumptions. Specifically, we find that some linguistic "anticipation" is less forward-looking than it appears in contemporary experimental paradigms. Much (though not all) anticipation may be explained by passive mechanisms like priming rather than optimal forecasting, greatly reducing the computational complexity that must be attributed to human language comprehension. The other study uses event-related potentials (ERPs) to re-evaluate a central finding that motivates modularity assumptions in some theories, and reveals that a component argued to reflect encapsulated syntactic processing (the ELAN) is sensitive to anticipation based on nonlinguistic expectations. These seemingly contrary results are consistent with a variety of current theories that posit dual (active and passive) processing mechanisms, as well as dynamical systems approaches such as Tabor's self-organizing parser.

2011-02-22

In Search of Grammar

Carol Padden

+ more

In the course of working on a new sign language used by a community of Bedouins in the Negev (Al-Sayyid Bedouin Sign Language, or ABSL), we discovered a type of lexical pattern that seems to have no precise parallel in spoken language. When we surveyed older sign languages, we found that they too exhibited a preferential pattern, which we call the object vs. handling pattern. ASL signers favor the object pattern, in which the physical properties of the object such as the length of a toothbrush or the teeth of a comb are represented. Signers of New Zealand Sign Language, a dialect of British Sign Language, favor the handling pattern in which they show how the object is held by hand such as grasping a toothbrush or holding a comb. The bias is never entirely exclusive, but strongly preferential. In principle, the lexical items of a given language could be divided evenly between the two types because they are equally iconic, but signers of unrelated sign languages are surprisingly consistent in their preference for one or the other pattern.

The discovery of a structure that seems specific to sign languages calls into question the task of identifying grammatical properties of human languages. Do human languages share an underlying set of structures, beyond which there are structures that differ depending on modality? Or are languages assemblages of structures that emerge in time using resources (literally) at hand - in the case of sign languages, gestural resources? The existence of this lexicalization pattern in ABSL is provocative for understanding properties of grammars: handling and instrument forms are equally iconic, yet in a new sign language, preferential structure emerges early in its history, at least by the second generation.

2011-02-15

Behavioral and Electrophysiological Investigations into the Structure and Computation of Concrete Concepts

Ben Amsel

+ more

This talk addresses the computation and organization of conceptual knowledge. Specifically, I focus on the recruitment of concrete knowledge during single word reading, which I address with a number of behavioral and electrophysiological experiments. I'll present a study assessing how the number of visual semantic features (listed by participants as being part of a given concept) influences both the speed of word meaning computation and its neural underpinnings. I also assess the flexibility and timecourse of semantic knowledge activation as a function of specific task constraints using a series of behavioral studies and a single-trial ERP study. I argue that the results presented herein do not support pure unitary theories of semantic memory organization. I conclude that the dynamic timecourses, topographies, and feature activation profiles are most consistent with a flexible conceptual system, wherein dynamic recruitment of representations in modality-specific and supramodal cortex is a crucial element of word meaning computation in the brain.

2011-02-08

The Development of Representations of Polysemous Words

Mahesh Srinivasan

+ more

A primary function of the representation of the meaning of a word is to link word forms with concepts--this ensures that when we hear a word, we activate the relevant concept, and that when we wish to communicate about some concept, we use the appropriate word form. The meaning of a word must be phrased at the appropriate level of granularity--it must be general enough to encode what the different uses of a word have in common (e.g., a core meaning of run must be general enough to apply to the different cases of humans and animals running), but cannot be so general that it also applies to meanings that the word is not used for (e.g., the meaning of run should not also be applicable to a snake's movement).

The focus of this talk is on the representation of the meanings of polysemous words--e.g., the use of book to refer to an object (the gray book) or to the content it contains (the interesting book); the use of chicken to refer to an animal (the thirsty chicken) or to the meat derived from that animal (the tasty chicken). Because the different uses of polysemous words often cross ontological boundaries, single core representations that encode what the different uses have in common would be too vague to properly constrain how polysemous words are used. One alternative, which I refer to as the List Model of polysemy, is that each of the uses of a polysemous word may be separately listed in memory and linked to separate concepts. This approach, however, misses important generalizations with respect to how polysemous words are used--for instance, in addition to words like book, words like video and record can refer to the objects and to the abstract content they contain, and in addition to words like chicken, words like lamb and fish can refer to animals and to the meat derived from them.

A first set of studies explored 4 and 5-year-old children's representations of the polysemous meanings of words like chicken. These studies provided evidence that early in development, polysemous meanings are not represented as separate words but instead rely on generative structures: lexical or conceptual structures that encode the relations between polysemous meanings and permit the meanings of these words to shift. A second set of studies examined whether generative structures could facilitate children's acquisition of polysemous meanings, by constraining their hypotheses about how the meaning of a novel word can shift. These findings are discussed with respect to the implications they have for the representational basis of flexible language.

2011-02-01

WOMAN BOX PUSH, but *not* WOMAN BOY PUSH: How reversible events influence linguistic structure

Matt Hall

+ more

Human language structure is far from random, for two kinds of reasons: first, because we acquire language from input, and so learn the patterns that we were exposed to; but second, because we sometimes *fail* to learn or pass on the patterns in our input, instead gradually altering the system in systematic ways. Identifying these internal cognitive forces that compel language to take on particular forms has been one focus of my research.

In this talk, I argue that one of these forces is whether or not the patient of a transitive event could plausibly be the agent. (In other words, does semantics alone suffice for assigning thematic roles, or are other cues needed?) Using pantomime as a way to let participants sidestep the grammar of their native language, I show that despite a preference for SOV word order in non-reversible events (e.g. a woman pushing a box), participants actively avoid such descriptions of reversible events (e.g. a woman pushing a boy). I also show that some participants spontaneously invent proto-case marking for these cases. Taken together, the evidence suggests that while SOV may be preferred early in the development of a communicative system, the need to communicate about reversible events is its "Achilles heel", which contributes to the emergence of new linguistic structures.

2011-01-25

How does a grammatical category emerge? The case of sign language agreement verbs

Irit Meir University of Haifa

+ more

Grammatical categories (often referred to as 'functional categories') play an important role in various syntactic theories, yet their nature is often poorly understood or just taken for granted. It is not clear, for example, how many categories there are, whether there is a universal set of categories, and whether there are any constraints on possible categories.

In this talk I argue that one way of getting a better understanding of the nature of grammatical categories is by taking a diachronic perspective, that is, by examining how a grammatical category is "born". I will trace the development of the category of agreement verbs in Israeli Sign Language (ISL), a class of verbs that denote transfer and are marked by a specific grammatical inflection. By analyzing the different stages that gave rise to this system, I provide evidence for the following claims:

1. A grammatical category may arise not only via grammaticalization of free words, but also as a result of back formation and reanalysis. Therefore "today's morphology" is not always "yesterday's syntax".

2. A grammatical category may be modality-dependent. This constitutes a challenge to existing theories, especially theories assuming a universal set of categories, as they do not predict modality-dependent functional categories.

2011-01-18

Rhythm classes and the measuring of speech rhythm

Amalia Arvaniti

+ more

In the past decade, metrics that seek to measure durational variability in speech – such as the %V-ΔC of Ramus et al. (1999) or the PVIs of Grabe & Low (2002) – have been used to quantify the impressionistic division of languages into stress- and syllable-timing. Their initial success has bolstered the belief in rhythmic classes and has been used to support research on language acquisition and speech processing that relies on the idea of rhythmic classes as well. Yet research based on rhythm metrics is fraught with discrepancies which warrant further investigation. In this talk, I present results from production and perception that cast doubt on the validity of metrics as measures of rhythm and consequently on the idea of rhythm classes as a valid typological distinction.
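
For reference, the metrics mentioned above are usually defined roughly as follows (a sketch based on the standard formulations in the cited papers; the exact computations used in this study may differ): %V is the proportion of utterance duration occupied by vocalic intervals, ΔC is the standard deviation of consonantal interval durations (Ramus et al., 1999), and the normalized Pairwise Variability Index of Grabe & Low (2002) averages the durational difference between successive intervals, normalized by their mean duration:

\[
\mathrm{nPVI} \;=\; \frac{100}{m-1}\sum_{k=1}^{m-1}\left|\frac{d_k - d_{k+1}}{(d_k + d_{k+1})/2}\right|,
\]

where d_k is the duration of the k-th interval and m is the number of intervals in the utterance.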

In the production study, sentences, story reading and spontaneous speech were elicited from speakers of English, German, Spanish, Italian, Greek and Korean. The results show that metrics are less sensitive to cross-linguistic differences than to speaker-specific timing patterns, the elicitation task and the within-language variation reflected in the materials used. Overall, these results suggest that rhythmic classification based on measuring the durational variability of segmental intervals is difficult if not impossible to achieve with any consistency.

The perceptual results also show that classification based on the impression languages give (the original basis for rhythm classes) is equally difficult. Specifically, listeners heard either low-pass filtered sentences or sentences converted to "flat [sasasa]" – in which all vowel intervals are replaced by [a] and all consonantal intervals by [s] while F0 is flattened – and used a Likert scale to rate them for similarity to a series of non-speech trochees. It was hypothesized that stress-timed languages like English and German, the rhythm of which is said to be based on foot-initial prominences, would be rated more trochee-like than syllable-timed languages, such as Spanish or Italian, whose rhythm is said to be a cadence. The results provide no support for the idea that classification is driven by rhythm class, and indicate that the timing of consonantal and vocalic intervals is not processed independently of other prosodic cues (such as amplitude and F0).

Taken together, these results strongly suggest that the classification into distinct rhythm classes cannot be achieved either by measuring particular timing characteristics of the speech signal or by relying on the impression of rhythmicity languages give to listeners. These results cast doubt on the idea of rhythm classes and, consequently, on proposals about language acquisition and speech processing that rely on the categorization of languages along these lines. The reasons behind these results will be discussed, and proposals for an alternative view of speech rhythm and for protocols that can be used to investigate it experimentally will be presented.

2011-01-11

"The phonemic restoration effect reveals pre-N400 effect of supportive sentence context in speech perception"

David Groppe

+ more

The phonemic restoration effect refers to the tendency for people to hallucinate a phoneme replaced by a non-speech sound (e.g., a tone) in a word. This illusion can be influenced by preceding sentential context providing information about the likelihood of the missing phoneme. The saliency of the illusion suggests that supportive context can affect relatively low (phonemic or lower) levels of speech processing, which would be consistent with interactive theories of speech perception (McClelland & Elman, 1986; Mirman, McClelland, & Holt, 2006) and predictive theories of cortical processing (Friston, 2005; Summerfield & Egner, 2009). Indeed, a previous event-related brain potential (ERP) investigation of the phonemic restoration effect (Sivonen, Maess, Lattner, & Friederici, 2006) found that the processing of coughs replacing high versus low probability phonemes in sentential words differed from each other as early as the auditory N1 (120-180 ms post-stimulus); this result, however, was confounded by physical differences between the high and low probability speech stimuli. Thus it could have been caused by factors such as habituation and not by supportive context. We conducted a similar ERP experiment avoiding this confound by using the same auditory stimuli preceded by text that made critical phonemes more or less probable. We too found the robust N400 effect of phoneme/word probability, but did not observe the early N1 effect. We did, however, observe a left posterior effect of phoneme/word probability around 192-224 ms. It is not yet clear what level of processing (e.g., phonemic, lexical) produced this effect, but the effect is clear evidence that supportive sentence context can affect speech comprehension well in advance of the lexical/post-lexical semantic processing indexed by the N400. While a pre-N400 effect is supportive of interactive theories of speech perception and predictive theories of cortical processing, it is surprising that yet earlier effects weren't found if these theories are indeed true.

This work was completed in collaboration with Marvin Choi, Tiffany Huang, Joseph Schilz, Ben Topkins, Tom Urbach, and Marta Kutas.

2011-01-04

"Perceiving speech in context: Neural and behavioral evidence for continuous cue encoding and combination"

Joe Toscano Department of Psychology, University of Iowa

+ more

A classic problem in speech perception concerns the lack of a one-to-one, invariant mapping between acoustic cues in the sound signal and phonological and lexical-level representations. A great deal of this variability is due to different types of context effects, such as variation in speaking rate, differences between talkers' voices, and coarticulation from surrounding segments. Within each of these domains, a number of specialized solutions have been proposed. Here, I argue that general cue-integration principles may be sufficient for explaining context effects. Crucially, these principles can be implemented as relatively simple combinations of continuous cues, allowing listeners to integrate multiple, redundant sources of information and factor out predictable variation. This approach suggests that listeners encode acoustic cues independently of phonological categories and that techniques used to describe how they combine multiple cues may also apply to certain context effects. To assess these predictions, I present work using a recently developed ERP technique that allows us to examine cue encoding and phonological categorization, as well as experiments looking at listeners' use of multiple cues in a visual world eye-tracking task. In addition, I describe work extending this cue-integration approach to examine effects in spoken word recognition and experiments looking at whether context effects occur at the level of encoding or categorization. Together, these results suggest that general mechanisms of cue-integration may be much more powerful for handling variability in speech than previously thought.
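
To make the idea of "relatively simple combinations of continuous cues" concrete, here is a minimal, hypothetical sketch (not the model from the talk): two continuous cues are combined in a weighted sum passed through a logistic function, so that a contextual cue such as speaking rate simply shifts the effective category boundary for an acoustic cue such as VOT. All names and weights below are illustrative.

```python
import math

def p_voiceless(vot_ms, rate_cue, w_vot=0.35, w_rate=-2.0, bias=-10.0):
    """Toy logistic combination of two continuous cues (illustrative only).

    The categorization probability is read off a weighted sum of cues, so a
    change in the context cue changes the output for the same VOT value,
    i.e., it shifts the effective category boundary without any specialized
    rate-normalization mechanism.
    """
    z = w_vot * vot_ms + w_rate * rate_cue + bias
    return 1.0 / (1.0 + math.exp(-z))

# Same 30 ms VOT, different rate cues: the categorization probability changes.
print(p_voiceless(vot_ms=30, rate_cue=1.0))
print(p_voiceless(vot_ms=30, rate_cue=3.0))
```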

2010-11-30

Investigating the Time Course of Accented Speech Perception

Melanie Tumlin

+ more

While a fair amount of research has explored how adult listeners perceptually accommodate to accented speech, significantly less is known about how accent information is integrated in on-line processing, and about how children handle this type of variability in spoken language. I will discuss a proposed set of experiments aimed at addressing these questions using eye-tracking.

2010-11-23

Cumulative processing factors in negative island contexts

Simone Gieselman

+ more

Why is a particular sentence perceived as unacceptable? Is it because it violates a global grammatical constraint or because different factors conspire in a cumulative fashion to produce this effect? This question is especially pertinent within the linguistic literature on so-called island phenomena. In this talk, we focus on a particular kind of island created by negation (e.g. "How fast didn’t the intern complete the project?" – N.B. the sentence is fine without negation). We show that by using acceptability rating as a measure, we are able to isolate factors (negation, extraction, and referentiality) that account for gradience in the acceptability judgment data and eventually lead to unacceptability. There is a large literature on the difficulty of processing _negation_. Though the reason for this cost is still subject to debate, what is uncontroversial is that an appropriate discourse context is important. _Extraction_ has been associated with working memory load, and negative islands are contexts in which the processing of negation and working memory costs interact. _Referentiality_ has been primarily discussed in the syntactic literature on extraction, but is a fundamentally semantic notion. Our results show interaction effects among all three factors, which may be evidence that they draw on interrelated cognitive resources. We also report results comparing the effects of negation and of "also", a so-called presupposition trigger that imposes conditions on the discourse. The presupposition trigger and negation seem to interact with extraction in a very similar fashion, indicating that it is indeed the costs created by the discourse conditions of negation that interact with extraction.

2010-11-16

Neural substrates of rhythm, timing, and speech comprehension

Sonja Kotz Max Planck Institute for Human Cognitive and Brain Sciences (Leipzig)

+ more

Cortical neural correlates of linguistic functions are well documented in the neuroscience and the neuropsychological literature. However, the influence of non-linguistic functions such as rhythm and timing is still understudied in speech comprehension (see Kotz & Schwartze, 2010). This is surprising as rhythm and timing play a critical role in learning, can compensate for acquired and developmental speech and language disorders, and further our understanding of subcortical contributions to linguistic and non-linguistic functions. For example, recent neuroimaging and clinical evidence has confirmed the contributions of classical motor control areas (cerebellum (CE), basal ganglia (BG), supplementary motor area (SMA)) to rhythm, timing, music, and speech perception (Chen et al., 2008; Grahn et al., 2007; Geiser et al., 2009; Kotz et al., 2005; 2009). We consider serial order and temporal precision to be the mechanisms that are shared in simple and complex motor behaviour (e.g. Salinas, 2009) and speech comprehension (Kotz et al., 2009). Here we investigate with event-related brain potentials (ERPs) and functional magnetic resonance imaging (fMRI) (1) how syntax, adhering to serial and hierarchical order, and rhythm, organizing the temporal unfolding of utterances in speech, interact, and (2) how classical motor areas interface with supposed specialized areas in the perisylvian speech comprehension network. Our results reveal an interaction of syntax and rhythm in the P600 ERP component that is linked to sentential integration processes (Schmidt-Kassow & Kotz, 2009), a facilitatory effect of rhythmic regularity in classical perisylvian speech areas such as the superior temporal gyrus/sulcus (STG/STS), and the recruitment of classical motor areas (preSMA, lateral premotor cortex, BG, and CE) highlighting the impact of rhythm on syntax in speech comprehension.

2010-11-09

Look before you speak: The impact of perceptual, lexical, and conceptual accessibility on word ordering

Stefanie Kuchinsky Medical University of South Carolina

+ more

Given the amount of visual information in a scene, how do speakers determine what to talk about first? One hypothesis is that speakers start talking about what has attentional priority, while another is that speakers first extract the scene gist, using the obtained relational information to generate a rudimentary sentence plan before retrieving individual words. I will present experiments which evaluate these views by examining the conditions under which different types of information may be relevant for production. I do so by employing a modified version of Gleitman, January, Nappa, and Trueswell’s (2007) attentional cuing paradigm in which participants were found to be more likely to begin picture descriptions with a particular actor if their attention had been drawn to it. I examine the extent to which these effects are modulated by the amount of time given to extract the scene gist and by the ease of identifying the pictured event and actor names. I suggest that perceptual factors influence word ordering only when conceptual information is not immediately available or insufficient for generating an utterance framework.

2010-11-02

Nouns, Verbs, Arguments and Iconicity in an Emerging Sign Language

John Haviland

+ more

Zinacantec Family Homesign (ZFHS) is a new sign language developed in a single household in highland Chiapas, Mexico, where the deaf signers are surrounded by speakers of Tzotzil (Mayan). Such a new language and its highly iconic sign vehicles challenge easy assignment of such foundational linguistic elements as ‘part-of-speech’ categories and concomitant analysis of clause structure, especially syntactic expression of verbs and their arguments.

2010-10-26

Toward organizing principles for brain computation of language and cognition

Ned Sahin

+ more

The brain basis of human cognition is often described in terms of functional *regions*, largely because of the “dumb luck” that one of the relevant dimensions for brain organization is spatial: Anatomical regions like Broca’s and Wernicke’s areas have measurably distinct properties, and anatomically restricted injuries like strokes (and more recently anatomical imaging like fMRI) make it convenient to characterize them. However, the spatial dimension is of course not the only organizing principle of the brain. As just one example, I recently probed within Broca’s area, and found that within a single sub-region there were three distinct neural processing stages for three linguistically orthogonal aspects of word production: meaning, structure and sound form (peak activity at ~200, 320, and 450 ms) (Science, 2009). [This was enabled by the wonderful privilege to record intra-cranial electrophysiology (ICE) from electrodes implanted in the awake behaving human brain to guide surgery.] Multiplexing in the time dimension is therefore a necessary organizing principle for language processing; however, it is not sufficient. For instance, in a separate ICE data set, I found that language-related brain circuits at a given location and time oscillated at multiple frequencies, and these oscillatory bands had distinct temporal dynamics, correlated with distinct linguistic information, and indicated distinct physiological processes (e.g. cell firing vs. EPSPs). This allowed for a rudimentary process flow diagram of word production, from early visual input (~60ms) to articulatory output (~600ms and beyond) with multiple serial and parallel stages. However, even though the combination of spatial, temporal, frequency, and physiological dimensions may get us a little further toward the organizing principles of *individual* computational entities, there remains a larger challenge, namely in understanding how they work *together*. As an analogy, consider a team of specialists recruited for a complex project. The efficiency they offer is lost unless you can a.) divvy up parts of the project among the correct specialists, and then crucially b.) reassemble their individual output into a single solution. I will discuss one very recent ICE result that might suggest a possible organizing principle for how the brain addresses this challenge. Task-activated cell populations resonated in sync (phase-locked) with other populations near and far, in three distinct waves, consistent with the following model. The early wave readies the entire cortical system (from visual to motor) and divvies up the task among specialized circuits. During the middle wave, the actual linguistic processing takes place within the individual entities. In the final wave (around the time of the utterance), results from the specialized processing are brought together into a single holistic representation for output (e.g. a single grammatically-inflected word). These various candidate dimensions and principles will be discussed in terms of future directions.

2010-10-19

Quantifiers more or less quantify on-line: ERP evidence for partial incremental interpretation

Tom Urbach

+ more

There is plenty of evidence that people can construct rich representations of sentence meaning essentially word by word (incremental interpretation). But there is also clear evidence of systematic shallow (partial, underspecified, good enough) interpretation. What is the holiday where kids dress up in costumes and go door to door giving out candy? There isn't one, though on Halloween kids typically get candy (Reder & Kusbit, 1991). If a strong general principle of incremental interpretation with full, immediate semantic interpretation is untenable, it is unclear what principle(s) governing the speed and depth of interpretation should replace it. One way forward is to learn more about special cases that may prove diagnostic.

In this talk I will present results from three experiments that used plausibility judgments and ERPs in combination to track the time course of interpretation of quantified noun phrases (most farmers, a small number of engineers) and adverbs of quantification (often, rarely) in isolated sentences. The designs cross these quantifier types with general knowledge, as in "[Most/few] farmers grow [crops/worms]." In post-sentence plausibility judgments we observed a cross-over interaction in which, crucially, "Few farmers grow worms" was rated more plausible than "Few farmers grow crops". We also found that quantifiers modulated N400 amplitude in the expected direction at the critical object noun (crops/worms) but this effect fell well short of the cross-over interaction observed for the plausibility judgments.

Together (and only together) these results suggest that the comprehension system does register the meaning of quantifier expressions to at least some degree as they are initially encountered (incrementally) but that the full semantic interpretations in evidence at the plausibility judgments don't emerge until later.

http://dx.doi.org/10.1016/j.jml.2010.03.008

2010-10-05

Understanding language evolution through iterated learning: some ongoing experimental work

Simon Kirby School of Philosophy, Psychology and Language Sciences, University of Edinburgh

+ more

Language is not only a learned behaviour, it is also one that persists over time by learners learning from the behaviour of other learners. The implications of this process of "iterated learning" are only now beginning to be understood in general terms. Although it obviously underpins the phenomenon of language change, work over the last few years has shown it also drives the emergence of language structure itself and is therefore an integral part of the story of language evolution.

In this informal talk, I will present some of our ongoing, as yet unpublished, work attempting to understand how iterated learning leads to the emergence of language structure by recreating the transmission process in miniature in the lab. My aim will be to provoke discussion of the promises, limitations, and implications of iterated learning experiments and (hopefully!) gather suggestions of what we should look at next.

The talk will follow on from my Cognitive Science talk the previous day, but I will start with a brief recap for those of you who are unable to attend both.

2010-06-01

Automatic lexico-semantic activation across languages: Evidence from the masked priming paradigm

Maria Dimitropoulou Basque Center on Cognition, Brain and Language

+ more

The present work is aimed at examining the extent to which a bilingual individual automatically accesses the representations in one of the known languages independently from the other, and whether such instances of automatic cross-language activation are modulated by the level of proficiency in the non-dominant language. Our findings provide evidence for the existence of strong cross-language interactions and pose certain constraints on the predictions made by models of bilingual lexico-semantic organization.

 

Children's Sensitivity to Pitch Variation in Language

Carolyn Quam

+ more

Children acquire consonant and vowel categories by 12 months, but appear to take much longer to learn to interpret perceptible acoustic variation. Here, we consider children's interpretation of pitch variation. Pitch operates, often simultaneously, at different levels of linguistic structure. English-learning children must disregard pitch at the lexical level--since English is not a tone language--while still attending to pitch for its other functions. Study 1 shows that 2.5-year-old English learners know pitch cannot differentiate words in English. Study 2 finds that not until age 4–5 do children correctly interpret pitch cues to emotions. Study 3 demonstrates some improvement between 2.5 and 5 years in exploiting the pitch cue to lexical stress, but continuing difficulties at the older ages. These findings suggest a late trajectory for interpretation of prosodic variation; we suggest potential explanations for this protracted time-course.

 

Why Do People Gesture? Clues from Individual Differences in Cognitive Skills

Martha Alibali University of Wisconsin

+ more

Why do speakers gesture? Some clues to this puzzle may be obtained by considering individual differences in gesture behavior. In this talk, I present two strands of research on individual differences in gesture rates and gesture-speech integration. The first strand addresses variations in gesture as a function of language ability. My collaborators and I have addressed this issue by comparing gesture rates and gesture-speech integration in children with specific language impairment and children with typical development. The second strand addresses variations in gesture as a function of patterns of cognitive skills in adults. We have addressed this issue by comparing gesture rates and gesture-speech integration in individuals with different patterns of strengths in verbal and spatial skills. The findings have implications for theories of the source and functions of gesture.

 

Considering the source: preschool children and adults use speaker-related acoustic variability to predict upcoming referents

Sarah Creel UC San Diego

+ more

Children sometimes have difficulty integrating cues to meaning in spoken language processing. One type of information that is helpful to adult listeners is knowing who is talking, but we do not yet know whether children can utilize this information on-line, and, if they can, how they use it. Three experiments suggest that children can successfully and rapidly utilize information about the person speaking based on acoustic cues. Children who heard a character request a shape (Can you help me find the square?) showed different visual fixation patterns depending on the favorite color of the person talking. They were able to do this not only for gender-stereotyped preferences (pink vs. blue), but for newly learned preferences (black vs. white). Children generalized from two characters to two new characters based on gender. Their only failure was in learning color preferences for two similar characters (same gender and age), though adults did so readily. I will discuss why children and adults differ. Further, performance seems to be based on representations of the talkers as people, rather than being based on low-level associations between colors and voice qualities.

2010-04-20

Is the 'gl' of 'glimmer', 'gleam', and 'glow' meaningful? Frequency, sound symbolism, and the mental representation of phonaesthemes

Benjamin Bergen UC San Diego

+ more

For the most part, the sounds of words in a language are arbitrary, given their meanings. But there are exceptions. In fact, there are two ways in which words can be non-arbitrary. For one, there can be external reasons why a particular form would go with a given meaning, such as sound symbolism. Second, there are systematicities in languages, where words with similar forms are more likely than chance to have similar meanings. Such systematic form-meaning pairings, as observed in 'gleam', 'glow', and 'glimmer', are known as phonaesthemes. But are these systematicities psychologically real, or are they merely distributional relics of language change? In this talk, I'll describe some experimental work showing that these systematic form-meaning pairings are more than distributional facts about a lexicon - they also reflect organizational characteristics of the mental representation of words, their meanings, and their parts. I'll describe a priming methodology used to test what it is that leads phonaesthemes to be mentally represented, measuring effects of frequency, cue validity, and sound symbolism.

2010-04-13

Behavioral and neural measures of comprehension validation

Murray Singer University of Manitoba

+ more

It is proposed that memory-based processes permit the reader to continually monitor the congruence of the current text with its antecedents. Behavioral measures have indicated that these verification processes are influenced by factors including truth, sentence polarity, and discourse pragmatics. I will present converging ERP data that suggest hypotheses concerning stages of text integration within 1 second of processing.

2010-04-06

Scalar Implicatures in American Sign Language

Kate Davidson

+ more

Recently there has been a large body of research, both experimental and theoretical, on 'scalar implicatures,' the name given to the inference in (1b) that is made by a listener when a speaker utters (1a).
(1a) Speaker Says: Some of the cookies are on the table.
(1b) Hearer Infers: Not all of the cookies are on the table.
Theoretical debate focuses primarily on whether the inference in (1b) is due to non-linguistic pragmatic reasoning about the meaning of the sentence in (1a), or due to grammatical mechanisms that include the information in (1b) as part of the compositional semantic meaning of (1a). Because the type of inference in (1) happens in a wide variety of lexical domains and stands at the interface between the linguistic content and the surrounding social context, experiments have been conducted on the timing, acquisition, and effect of context on scalar implicatures in various spoken languages, though not in a sign language.

I will be presenting the results of one completed and two ongoing behavioral experiments which investigate scalar implicatures in American Sign Language from comparative and developmental perspectives using a new video/computer felicity judgement paradigm. Comparisons between ASL and English show that while differences in one scalar domain (coordination) do not affect scalar implicature calculations, differences in another (spatial encoding in classifiers in ASL vs. non-spatial description in English) do have effects on interpretation. This work also sets a baseline comparison for data I present that test later L1 learners of ASL, who are without general cognitive impairments but often show subtle linguistic deficits due to lack of early linguistic input, and thus can help address the issue of linguistic vs. social knowledge required for scalar implicatures.

2010-03-30

Recreating Duality of Patterning in the Lab: A new experimental paradigm for studying the emergence of sub-lexical structure

Alex Del Giudice (in collaboration with Simon Kirby & Carol Padden)

+ more

I will present results of 3 pilot experiments in a paradigm that explores the development of sub-lexical structure. In this paradigm, human participants learn a lexicon of visual symbols produced with a digitizing stylus such that the mapping from the stylus to the screen is restricted, minimizing the use of orthographic characters or pictographs. Each participant learns and recreates the set of symbols, and these recreations are transmitted to the next participant in a diffusion chain through a process of iterated learning. The iterated learning paradigm allows us to observe evolution of a "cultural" behavior such that no single participant is the driver of innovation and selection; instead the behavior is cumulatively developed across individuals.
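
As a purely illustrative aside (not the experiment itself, which used hand-drawn visual symbols and human learners), the logic of a diffusion chain can be sketched in a few lines of code: each simulated "generation" reproduces the previous generation's lexicon imperfectly, and whatever biases the learner has accumulate over the chain. All names and parameters below are hypothetical.

```python
import random

def learn(lexicon, error_rate=0.2, rng=random):
    """Toy learner for an iterated-learning chain (illustrative only).

    Each symbol is a tuple of sub-elements; on each 'error' the learner
    substitutes the sub-element that is currently most frequent in the
    lexicon, a crude stand-in for a reuse bias.
    """
    counts = {}
    for symbol in lexicon.values():
        for el in symbol:
            counts[el] = counts.get(el, 0) + 1
    frequent = max(counts, key=counts.get)
    return {meaning: tuple(frequent if rng.random() < error_rate else el
                           for el in symbol)
            for meaning, symbol in lexicon.items()}

# Seed lexicon: 8 meanings, each mapped to 3 random "strokes" (0-9).
lexicon = {m: tuple(random.randrange(10) for _ in range(3)) for m in range(8)}
for generation in range(10):   # transmit the lexicon down a 10-person chain
    lexicon = learn(lexicon)
print(lexicon)  # later generations increasingly reuse a few sub-elements
```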

We observe the transition of the lexicon from a relatively idiosyncratic set of abstract forms and form-meaning pairs to a set of symbols that show compelling evidence for sub-lexical structure, independent of meaning. As the language changes (an inescapable result of several factors), items in the lexicon of symbols begin to converge until a series of generations appears to analyze symbols as containing discrete sub-elements. This analysis leads to such sub-elements pervading the lexicon. These sub-elements are comparable to phonological units of spoken and signed languages.

2010-03-16

The Physiology of Lateralization: Reviewing Why Brain Size REALLY Matters

Ben Cipollini

+ more

A common method for studying the cerebral cortex is to associate a cognitive function with a location in the cortex. Such associations allow us to use imaging data to make educated guesses about what cognitive functions are involved in a task, and to use physiological data to suggest relationships between anatomically related functions.

Lateralization is a special case of this method. By identifying a function as "dominant" in one hemisphere, we can attempt to relate it to other functions associated with the same hemisphere, to contrast it with functions associated with the opposite hemisphere, or to relate it to anatomical differences between the hemispheres.

This talk will review data and theory in experimental and theoretical neuroscience to motivate the method of associating function with location. From there, further data will be reviewed to highlight important caveats in our current understanding of the physiology of lateralization.

Theory of mammalian brain scaling, physiology of lateralization, and a specific focus on what we do (and do not) know about the corpus callosum will be discussed, with the goal of painting a coherent picture of what we know about the physiology of lateralization and how we can interpret experimental results within the limits of that knowledge.

2010-03-09

Stress Matters: Effects of Anticipated Lexical Stress on Silent Reading

Charles Clifton, Jr. University of Massachusetts, Amherst

+ more

I will present findings from two eye-tracking studies designed to investigate the role of metrical prosody in silent reading. In Experiment 1, subjects read stress-alternating noun-verb homographs (e.g. PREsent, preSENT) embedded in limericks, such that the lexical stress of the homograph, as determined by context, either matched or mismatched the metrical pattern of the limerick. The results demonstrated a reading cost when readers encountered a mismatch between the predicted and actual stress pattern of the word. Experiment 2 demonstrated a similar cost of a mismatch in stress patterns in a context where the metrical constraint was mediated by lexical category rather than by explicit meter. Both experiments demonstrated that readers are slower to read words when their stress pattern does not conform to expectations. The data from these two eye-tracking experiments provide some of the first on-line evidence that metrical information is part of the default representation of a word during silent reading and plays a role in controlling eye movements.

2010-03-02

The Physiology of Lateralization: Reviewing Why Brain Size REALLY Matters

Ben Cipollini

+ more

A common method for studying the cerebral cortex is to associate a cognitive function with a location in the cortex. Such associations allow us to use imaging data to make educated guesses about what cognitive functions are involved in a task, and to use physiological data to suggest relationships between anatomically related functions.

Lateralization is a special case of this method. By identifying a function as "dominant" in one hemisphere, we can attempt to relate it to other functions associated with the same hemisphere, to contrast it with functions associated with the opposite hemisphere, or to relate it to anatomical differences between the hemispheres.

This talk will review data and theory in experimental and theoretical neuroscience to motivate the method of associating function with location. From there, further data will be reviewed to highlight important caveats in our current understanding of the physiology of lateralization.

Theory of mammalian brain scaling, physiology of lateralization, and a specific focus on what we do (and do not) know about the corpus callosum will be discussed, with the goal of painting a coherent picture of what we know about the physiology of lateralization and how we can interpret experimental results within the limits of that knowledge.

2010-02-23

Early experience with language really matters: Links between maternal talk, processing efficiency, and vocabulary growth in diverse groups of children

Anne Fernald (Stanford)

+ more

Research on the early development of cognition and language has focused primarily on infants from middle-class families, excluding children from less advantaged circumstances. Why does this matter? Because SES differences are robustly associated with the quantity and quality of early cognitive stimulation available to infants, and early cognitive stimulation really does matter. Longitudinal research on the development of fluency in language understanding reveals relations between processing speed in infancy and long-term outcomes, in both high-SES English-learning children and low-SES Spanish-learning children. But by 18 months, we find that low-SES children are already substantially slower in processing speed as well as vocabulary growth. It turns out that differences in early experience with language contribute to the variability observed in children’s efficiency in real-time processing. Within low-SES families, those children whose mothers talked with them more learned vocabulary more quickly – and they also made more rapid gains in processing speed. By examining variability both within and between groups of children who differ in early experience with language, we gained insight into common developmental trajectories of lexical growth in relation to increasing processing efficiency, and discovered environmental factors that may enable some children to progress more rapidly than others.

2010-02-16

Tips of the slongue: Using speech errors as a measure of learning

Jill Warker

+ more

Adults can learn new artificial phonotactic constraints (e.g., /f/ only occurs at the beginning of words) by producing syllables that contain those constraints. This learning is reflected in their speech errors. However, how quickly evidence of learning appears in errors depends on the type of constraint. Second-order constraints in which the placement of a consonant depends on another characteristic of the syllable (e.g., /f/ occurs at the beginning of words if the vowel is /I/) require a longer learning period. I will present a series of experiments using speech errors as an implicit measure of learning that investigate the characteristics underlying second-order phonotactic learning, such as whether there are limits on what types of dependencies can be learned, whether consolidation plays a role in learning, and how long the learning lasts.

2010-02-09

Default units in language acquisition

David Barner

+ more

When asked to “find three forks” adult speakers of English use the noun “fork” to identify units for counting. However, when number words (e.g., three) and quantifiers (e.g., more, every) are used with unfamiliar words (“Give me three blickets”) noun-specific conceptual criteria are unavailable for picking out units. This poses a problem for young children learning language, who begin to use quantifiers and number words by age two, despite knowing a relatively small number of nouns. Without knowing how individual nouns pick out units of quantification – e.g., what counts as a blicket – how could children decide whether there are three blickets or four? Three experiments suggest that children might solve this problem by assigning “default units” of quantification to number words, quantifiers, and number morphology. When shown objects broken into arbitrary pieces, 4-year-olds in Experiment 1 treated pieces as units when counting, interpreting quantifiers, and when using singular-plural morphology. Experiment 2 found that although children treat object-hood as sufficient for quantification, it is not necessary. Also sufficient for individuation are the criteria provided by known nouns. When two nameable things were glued together (e.g., two cups), children counted the glued things as two. However, when two arbitrary pieces of an object were put together (e.g., two parts of a ball), children counted them as one, even if they had previously counted the pieces as two. Experiment 3 found that when the pieces of broken things were nameable (e.g., wheels of a bicycle) 4-year-olds did not include them in counts of whole objects (e.g., bicycles). We discuss the role of default units in early language acquisition, their origin in acquisition, and how children eventually acquire an adult semantics identifying units of quantification.

2010-02-02

(Eye)tracking multiple worlds

Gerry Altmann University of York, UK

+ more

The world about us changes at an extraordinary pace. If language is to have any influence on what we attend to, that influence has to be exerted at a pace that can keep up. In this talk I shall focus on two aspects of this requirement: The speed with which language can mediate visual attention, and the fact that the cognitive system can very efficiently make up for the fact that, to be expedient (i.e. to keep up with the changing world), we do not in fact refer to all the changes that are associated with, or entailed by, an event. Rather, we infer aspects of those changes. One example of this is through elaborative inference, and another is through the manner in which we track (often unstated) changes in the states of objects as those objects undergo change. The talk will conclude with data suggesting that multiple representations of the same object in different event-dependent states may compete with one another, and that this competitive process may bring both costs and benefits.

2010-01-26

Are grammatical constructions meaningful? What mouse-tracking tells us.

Benjamin Bergen

+ more

All languages display systematic patterns of grammar. These "grammatical constructions" serve to organize words. But on some theoretical accounts (e.g. Goldberg, 1995) they do more than this - they also contribute to the meaning of the utterances they occur in. For instance, the English prepositional dative (1) and double-object constructions (2) have been argued to encode slightly different meanings; the dative ostensibly encodes motion along a path, while the double-object construction encodes transfer of possession (Langacker, 1987).

1. I'm sending the book to my brother.
2. I'm sending my brother the book.

I'll report on two studies that experimentally investigate the effects that hearing a sentence with one construction or another has on language comprehenders. Both studies use mouse-tracking to measure physical responses that comprehenders make subsequent to sentence processing. The first addresses the purported differences between the prepositional dative and double-object constructions, and the second compares active and passive constructions. In both, we find that the grammatical construction used affects how comprehenders subsequently move their bodies, which suggests that constructions may contribute to the process of meaning construction.

2010-01-19

Modeling OCP-Place with the Maximum Entropy Phonotactic Learner

Rebecca Colavin

+ more

The modeling of speaker judgments has recently been marked by the advent of models that assume distinctive features and natural classes as the representational elements of phonotactic processing. We investigate the performance of one such model, the Hayes and Wilson (2008) Maximum Entropy (MaxEnt) Phonotactic Learner, and show that the model fails to make the generalizations necessary to predict speaker judgments for a language where a complex constraint is active and, furthermore, that in some cases the relationship between gradient speaker judgments and the statistics of the lexicon is not transparent.

Hayes & Wilson’s learner defines a set of natural classes based on distinctive features and learns a set of weighted phonotactic constraints by iterating between (i) weighting an existing set of constraints according to the principle of Maximum Entropy, and (ii) adding new constraints based on their Observed/Expected (O/E) ratios given the current constraint set, starting with low ratios and moving incrementally higher.
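
In schematic form, the selection loop just described might look something like the following (a simplified sketch of the procedure as summarized above, not the authors' implementation; fit_weights, observed_count, and expected_count are placeholders for the real MaxEnt optimization and natural-class machinery):

```python
def select_constraints(candidates, lexicon, max_constraints, fit_weights,
                       observed_count, expected_count, oe_threshold=1.0):
    """Schematic constraint-selection loop (illustrative, not H&W's code).

    Alternates between (i) fitting MaxEnt weights for the current grammar
    and (ii) adding the as-yet-unselected candidate constraint with the
    lowest observed/expected ratio under that grammar, i.e., the most
    underattested pattern is penalized first.
    """
    grammar, weights = [], {}
    while len(grammar) < max_constraints:
        weights = fit_weights(grammar, lexicon)              # step (i)
        best, best_oe = None, oe_threshold
        for c in candidates:
            if c in grammar:
                continue
            o = observed_count(c, lexicon)
            e = expected_count(c, grammar, weights, lexicon)
            oe = o / e if e > 0 else float("inf")
            if oe < best_oe:                                 # step (ii)
                best, best_oe = c, oe
        if best is None:          # nothing left below the O/E threshold
            break
        grammar.append(best)
    return grammar, weights
```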

We tested the MaxEnt learner on data from Amharic, a Semitic language. Like those of other Semitic languages, Amharic verb roots show OCP violations for place of articulation (Bender & Fulass 1978, Rose & King 2007). Homorganic consonants occur less often in a verb root than expected if they co-occurred freely (Greenberg 1950, McCarthy 1994, Buckley 1997, Frisch, Pierrehumbert & Broe 2004). OCP-Place in a Semitic language poses two distinct challenges. (1) Constraint length. OCP-Place restrictions span up to three consonants. (2) Gradiency. OCP-Place restrictions in Semitic languages are stronger in some word positions and for some places of articulation than others. We trained the MaxEnt learner on a corpus of 4242 Amharic verb roots drawn from Kane (1990), and compared the learner’s performance to the judgments of nonce verb roots. Judgment data were collected from 20 native Amharic speakers, who were asked to rate the acceptability of 270 nonce verb roots, balanced for presence/absence of constraint violation, observed/expected ratio, transitional probability, expected probability, and density. 90 nonce roots contained OCP violations. The design was similar to that for Arabic in Frisch & Zawaydeh (2001), and the results showed that speakers assigned lower ratings to nonce forms with OCP violations.

We investigated the claim in Hayes and Wilson (2008) that grammars that achieve greatest explanatory coverage (as measured by assigning a high log-likelihood to the lexicon) are also those that best predict speaker judgments of nonce forms. We evaluated automatically learned grammars of many different sizes as well as a hand-written grammar whose constraints were chosen from those available to the automatic learner so as to embody OCP-Place restrictions on the co-occurrence of similar and identical consonants within a verb root. The constraints of the hand-written grammar were assigned weights via MaxEnt. The predictions of each model were compared to the Amharic native speaker judgments and the (cross-validated) log-likelihood they assigned to the learning data.

The correlation between speaker judgments and the predictions of the hand-written grammar was higher than that for the best learned grammar (r = 0.47 and r = 0.34 respectively). However, the grammars that best predicted speaker judgments were not those with the highest log-likelihood; the correlations between speaker judgments and model predictions peaked with grammars of medium size while log-likelihood continued to grow substantially before leveling off.

Regarding the difference in performance between the hand-written and automatically learned grammars, our results indicate that the MaxEnt learner seems to show a stronger bias toward selecting constraints that involve aggressive generalization than the speaker-judgment data suggest. For a given level of accuracy (Observed/Expected ratio), the learner's generalization heuristic selects short constraints over longer ones. A majority of the constraints that are acquired first span only one or two segments and capture statistical regularities of the lexicon other than OCP-Place. As the model proceeds towards longer constraints (such as the OCP-Place constraints that constitute the hand-written grammar), OCP-Place restrictions are weakened by the effect of the previously learned non-OCP restrictions and are less likely to be selected. Crucially, this suggests that to model phonotactic acquisition, constraint learning must allow either direct acquisition of high-level generalizations such as those recognized by generative phonology, or some mechanism whereby constraints learned early can be eliminated from the grammar if a more general, albeit longer, constraint is found. Finally, the misalignment between model predictiveness and the log-likelihood of the learning data suggests that there are still open questions regarding the nature of the relationship between the statistics of the lexicon and speaker judgments.

2010-01-12

Fixation durations in first-pass reading reflect uncertainty about word identity

Nathaniel Smith

+ more

Many psycholinguistic properties believed to affect reading time, like word frequency or predictability, are dependent on the identity of the word being read. But due to sensory and other forms of noise, we would not expect the processor to have perfect information about the identity of the word to be processed -- especially during early stages of processing. We construct a simple Bayesian model of visual uncertainty during reading, and show that, in at least some cases, the processor marginalizes over possible words to produce a "best guess" of the predictability of the word being read.
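
One way to write this down (an illustrative formulation consistent with the description above, not necessarily the exact model in the talk) is that the effective predictability of the word at position t is the posterior-weighted average of contextual predictability over candidate identities w:

\[
\widehat{\mathrm{pred}}_t \;=\; \sum_{w} P(w \mid \text{visual input}_t)\, P(w \mid \text{context}),
\qquad
P(w \mid \text{visual input}_t) \;\propto\; P(\text{visual input}_t \mid w)\, P(w),
\]

so that noisy visual evidence blends the predictability of the intended word with that of its visually confusable neighbors.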

2009-12-01

Can lexical selection explain the consequences of bilingualism?

Elin Runnqvist University of Barcelona and University Pompeu Fabra

+ more

Recent research has revealed both positive (better performance in some tasks requiring executive control) and negative (slower naming latencies in picture naming tasks) consequences of bilingualism. A common explanation for these phenomena is related to an inhibitory account of the process of lexical selection: in order to achieve successful selection of the lexical representations in the intended language, the activation of those representations corresponding to the other language needs to be suppressed. This constant use of inhibitory control could explain why bilinguals outperform monolinguals in tasks requiring executive control and also why they are slower in selecting the correct word during speech production. However, while there is evidence for activation of both languages of a bilingual in the process of speech, it is not clear that the activity of the non-target language interferes with the communicative goal during lexical selection. The aim of the studies presented in this talk was to test the inhibitory account of bilingual lexical selection. Our results are most easily explained within a model that does not make use of inhibition, suggesting that lexical selection is not the cause of the advantages and disadvantages of bilingualism.

2009-11-24

Language-specific and universal patterns of accentuation and narrow focus marking in Romani

Amalia Arvaniti

+ more

In this talk I present a first sketch of the intonation and rich accentuation and focus marking devices of Komotini Romani, a variety of Vlach Romani spoken in Thrace, the northeast region of Greece. The analysis is based on data from spontaneous conversations and story-telling involving several Romani speakers. These data show that Komotini Romani uses two cross-linguistically unusual features to mark focus. First, focus can be indicated by a non-metrically motivated stress-shift. Second, changes in accentuation are frequently co-produced with word order changes, the focus particle da (borrowed from Turkish), or stress-shift, while several of these devices can be used concurrently on occasion. These data show that focus marking strategies additional to those already known may be available cross-linguistically, such as the stress-shift of Romani. In addition, Romani can be added to the small number of languages that have a large repertoire of focus markers and tend to use them concurrently. In this respect, these data argue against a strong interpretation of the “minimality condition” recently proposed by Skopeteas & Fanselow (to appear) regarding focus marking strategies, according to which less complex structures are preferred to more complex ones (if both available in a given language) following a markedness scale from lightest to most structurally complex: in situ (prosody) < reordering < cleft. Komotini Romani clearly does not follow this scale, marking focus both prosodically and syntactically (or morphologically) on most occasions. Nevertheless, clefting is indeed extremely rare in this variety. I argue that this is because the possibilities afforded Romani by the combination of prosodic devices and word-order changes make clefting unnecessary, thus indirectly validating the scale of Skopeteas & Fanselow.

2009-11-17

ERP Studies of Sarcasm Comprehension

Seana Coulson

+ more

Calvin: Moe. Give me my truck back. It's not yours.
Moe: It is now. You gave it to me.
Calvin: I didn't have much choice did I?! It was either give up the truck or get punched!
Moe: So?
Calvin: So I only "gave" it to you because you're bigger and meaner than me!
Moe: Yeah? So?
Calvin: The forensic marvel has reduced my logic to shambles.
Moe: You're saying you changed your mind about getting punched?
--Bill Watterson

Calvin's last utterance in this exchange is an example of discourse irony, a genre of speech in which the content of the meta-message contrasts with that of the message. In this talk, I will sketch an account of the meaning construction operations involved in sarcasm, and consider its compatibility with the cognitive neuroscience literature on the comprehension of sarcastic utterances. I will briefly review ERP studies of sarcasm comprehension, and describe recent studies in my lab on this topic.

2009-11-10

The relationship between sound and meaning in spoken language

Lynne Nygaard Emory University

+ more

A fundamental assumption regarding spoken language is that the relationship between the sound structure of spoken words and semantic or conceptual meaning is arbitrary. Although exceptions to this arbitrariness assumption have been reported (e.g., onomatopoeia), these instances are thought to be special cases, with little relevance to spoken language and reference more generally. In this talk, I will review a series of findings that suggest that not only do non-arbitrary mappings between sound and meaning exist in spoken language, but that listeners are sensitive to these correspondences cross-linguistically and that non-arbitrary mappings have functional significance for language processing and word learning. These findings suggest that a general sensitivity to cross-modal perceptual similarities may underlie the ability to match word to meaning in spoken language.

2009-11-03

Do Comprehenders Benefit When Their Interlocutors Repeat Their Labels and Structures?

Victor Ferreira

+ more

It is well established that speakers repeat their interlocutors’ words (Brennan & Clark, 1996) and structures (Pickering et al., 2000). But do comprehenders benefit if speakers use words and structures the comprehenders just used? A simple benefit in one-shot communicative interchanges has never been demonstrated.

Three experiments explored this issue. In each, subjects described or chose lexical or syntactic pictures that allowed more than one description. The experiments used a prime-target paradigm. For lexical pictures, on (non-filler) prime trials, subjects described a lexical picture how they wished. On target trials, subjects saw two pictures, one of which was the same as the prime. A confederate was scripted to describe that picture with either the same label or the other label. For syntactic pictures, on prime trials, subjects described a syntactic picture how they wished. On target trials, subjects saw two pictures; both had the same subject and verb but different objects, and the verb (but nothing else) was the same as in the prime. A confederate described one picture with either the same structure or the opposite structure. We measured latencies to select the described picture.

Experiment 1 explored the basic effect. Subjects chose pictures faster if the confederate repeated their labels (lexical trials) or structures (syntactic trials – even though prime and target sentence content differed!) compared to when the confederate did not. Thus, comprehenders do benefit when their interlocutors use the same labels and syntactic structures as they themselves just used.

Experiment 2 assessed whether the effect is partner specific. Half of trials were like in Experiment 1. For the other half, the computer (not the confederate) described targets. If benefits are observed even with computer descriptions, the effect is not partner specific (because computers can’t hear!). Subjects again chose pictures faster when they heard their own labels or syntactic structures repeated to them, both for confederate and for computer descriptions (for syntactic pictures, a benefit was not observed with computer descriptions in the original experiment, but was in a replication). Thus, benefits are not partner-specific.

A concern is that because subjects freely chose descriptions, the lexical effects might come from subjects’ preferences – subjects might think “fishtank” is an unusual name for the target. In Experiment 3, subjects came in one week and described all pictures. The next week, half of trials were prime-target, like in Experiment 2 (with computer descriptions). For the other half, primes were omitted, and targets were described with the same or other label or structure the subject used a week ago. Priming benefits were observed. But for lexical pictures, subjects were not faster if targets were described with the same label as they used the previous week (for syntactic pictures they were). Thus, priming effects can’t be reduced to preference effects.

Overall, comprehenders select pictures faster if they hear their own just-produced labels or syntactic structures. This isn’t partner specific, and it’s not because subjects prefer particular picture labels. This suggests repeating words and structures benefits communication.

2009-10-27

What can ERPs and fMRI tell us about language comprehension? Streams of Processing in the Brain

Gina Kuperberg

+ more

(Tufts University, Department of Psychology; Department of Psychiatry, Mass General Hospital; Martinos Center for Biomedical Imaging)

Traditional models of sentence comprehension have generally focused on the syntactic mechanisms by which words are integrated to construct higher order meaning. The assumption here is that single words are retrieved from the lexicon and then combined through their syntactic representations. Any material stored within semantic memory, beyond the single word, is assumed to exert its influence either by directly influencing syntactic combination or during a later phase of processing. I will discuss data from event-related potential (ERP) and functional Magnetic Resonance Imaging (fMRI) studies of language comprehension that challenge such assumptions. I will suggest that word-by-word syntactic-based combination operates in parallel with semantic memory-based mechanisms, with additional analysis occurring when the outputs of these distinct but interactive neural streams of processing contradict one another. The parallel operation of these processing streams gives rise to a highly dynamic, interactive, and balanced system that may be a fundamental aspect of language comprehension, ensuring that it is fast and efficient, making maximal use of our prior experience, but also accurate and flexible in the face of novel input. Indeed, it may be a more general feature of comprehension outside the language domain: I will present data suggesting that analogous streams of processing may be engaged during our comprehension of real-world visual events, depicted in short, silent video-clips. Finally, I will suggest that imbalances between semantic memory-based and combinatorial streams of processing may help explain patterns of language abnormalities in various disorders. In particular, I will briefly discuss the syndrome of schizophrenia – a common neuropsychiatric disorder in which language processing can be dominated by semantic associations, at the expense of syntactic-based combination, possibly leading to symptoms of psychosis.

2009-10-20

The Electrophysiology of Speech Production: “It Is Time, Time Matters.”

Kristof Strijkers
University of Barcelona / University Pompeu Fabra

+ more

Knowledge of the speed with which we process the core structures involved in speech production, and of the temporal relation between these different mental operations, is vital for our understanding of how we are able to speak. However, the time-course involved in speech production hasn’t received much attention in the literature, and most of the chronometric information is derived from indirect and rather complex tasks (e.g., Indefrey & Levelt, 2004). In the present talk I aim to fill this gap by combining the fine temporal resolution of ERPs with simple overt picture naming. In particular, the electrophysiological signature in response to word retrieval will be explored. In order to obtain reliable time-course information, different lexical variables were manipulated in these tasks and contrasted to each other in the ERPs. In one such study we investigated frequency and cognate effects during overt picture naming and observed that both lexical variables elicited ERP differences starting ~185 ms after picture onset (Strijkers, Costa & Thierry, 2009). The frequency and cognate effects seemed especially sensitive to an early positive-going ERP with its peak around 200 ms (P2) and a maximal scalp distribution at bilateral posterior sites. In the remainder of the talk I will present (a) some data exploring possible confounds/alternative explanations for these initial results; (b) a few experiments seeking convergent evidence using different manipulations; and (c) a picture naming study trying to characterize not only the onset but also the duration of word retrieval. The presented data reveal that the brain engages very quickly in the retrieval of words one wishes to utter and offer a clear time-frame of how long it takes for the competitive process of activating and selecting words in the course of speech to be resolved. These new steps towards a temporal map of speech may provide valuable and novel insights for understanding this remarkable human ability.

2009-10-13

Structural Commonalities in Human and Avian Song

Adam Tierney

+ more

While many aspects of human song vary cross-culturally, other features are widespread. For example, song phrases tend to follow an arch-like pitch contour, the final note of a phrase tends to be longer than the others, and large jumps in pitch tend to be followed by pitch movements in the opposite direction. One possible explanation for these regularities is that they are somehow genetically specified. Alternatively, the patterns could be a consequence of bodily constraints. If so, they should be found in the songs of birds as well, as both humans and birds produce songs using vibrating vocal folds driven by a pulmonary air source. Here we show that all three of these patterns are present in birdsong. We encoded the most taxonomically diverse set of birdsongs analyzed to date (from 54 families) as sequences of discrete pitches. The skip-reversal pattern and final lengthening were present at the level of the entire birdsong, while the arch contour was present at the level of the individual note, suggesting (as birds breathe between notes) that it is tied to the breath cycle. Furthermore, we found these patterns in spoken sentences from four different languages and instrumental classical themes written by composers from five different countries. Our results demonstrate that diverse communicative domains share a wide variety of statistical patterns, the result of shared bodily constraints. The auditory system likely takes advantage of the existence of these patterns, as they mark the beginnings and the ends of notes and phrases and have presumably been present for as long as the vocal apparatus has existed.

2009-10-06

Fore-words: Prediction in language comprehension

Kara Federmeier
University of Illinois

+ more

Accumulating evidence attests that, during language comprehension, the brain uses context to predict features of likely upcoming items. However, although prediction seems important for comprehension, it also appears susceptible to age-related deterioration and can be associated with processing costs. The brain may address this trade-off by employing multiple processing strategies in parallel, distributed across the two cerebral hemispheres. In particular, we have shown that left hemisphere language processing seems to be oriented toward prediction and the use of top-down cues, whereas right hemisphere comprehension is more bottom-up, biased toward the veridical maintenance of information. Such asymmetries may arise, in turn, because language comprehension mechanisms are integrated with language production mechanisms only in the left hemisphere (the PARLO framework).

2009-06-02

Giving Speech a Hand: Neural processing of co-speech gesture in native English speakers and Japanese-English bilinguals as well as typically-developing children and children with autism

Amy L. Hubbard

+ more

Successful social communication involves the integration of simultaneous input from multiple sensory modalities.  Co-speech gesture plays a key role in multimodal communication, its effects on speech perception having been demonstrated on the behavioral and neural level (cf. McNeill, 2005; Willems et al., 2007).  We used an ecologically valid fMRI paradigm to investigate neural responses to spontaneously produced beat gesture and speech.  In our first study, we found that adult native English speakers show increased activity in superior temporal gyrus and sulcus (STG/S) while viewing beat gesture in the context of speech (versus viewing a still body or nonsense movements in the context of speech).   In our second study, we again observed increases in the BOLD signal in STG/S while Japanese ESL speakers viewed beat gesture in the context of speech (as compared to viewing a still body or gesture tempo in the context of speech).  These data suggest that co-speech gesture is processed (and/or integrated) in areas known to underlie speech perception, and meaningfulness of co-speech gesture is linked to its embodiment.  In our third study, we examined co-speech gesture processing in children with Autism Spectrum Disorder (ASD; a developmental disorder characterized by excessive deficits in social communication) and typically developing children.  Similar to our adult subjects, our typically developing matched controls showed increased activity in STG/S for viewing co-speech gesture (versus a still body with speech).  However, children with ASD showed no increases in STG/S for this same contrast.  These findings suggest that speech and gesture contribute jointly to communication during social interactions and that neural processes underlying co-speech gesture processing are disrupted in a clinical disorder well-known for its deficits in social communication.

2009-05-26

A new model of local coherences as resulting from Bayesian belief update

Klinton Bicknell
Joint work with Roger Levy (UC San Diego) & Vera Demberg (University of Edinburgh)

+ more

Most models of incremental sentence processing assume that the processor does not consider ungrammatical structures. However, Tabor, Galantucci, and Richardson (2004) showed evidence of cases in which a syntactic structure that is ungrammatical given the preceding input nevertheless affects the difficulty of a word, termed local coherence effects. Our work fills two gaps in the literature on local coherences. First, it demonstrates, through two experiments with an eye-tracking corpus, that local coherence effects are evident in the reading of naturalistic text, not just in rare sentence types like Tabor et al.'s. Second, it specifies a new computational model of local coherence effects under rational comprehension, proposing that local coherences arise as a result of updating bottom-up prior beliefs about the structures for a given string to posterior beliefs about the likelihoods of those structures in context. The critical intuition embodied in the model is that larger updates in probability distributions should be more processing-intensive; hence, the farther the context-conditioned posterior is from the unconditioned prior, the more radical the update required and the greater the processing load. We show that an implementation of our model using a stochastic context-free grammar (SCFG) correctly predicts the pattern of results in Tabor et al.
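
One way to make the key quantity concrete is sketched below: measure how far the context-conditioned posterior over candidate structures has moved from the bottom-up prior. The use of KL divergence here, the structure names, and the probabilities are illustrative assumptions on my part; the abstract does not specify which divergence or which values the model uses.

```python
import math

def belief_update_size(prior, posterior):
    """KL divergence D(posterior || prior) over a shared set of candidate
    structures. Larger values correspond to a more radical belief update,
    and hence (on the account sketched above) to greater processing load.
    The choice of KL divergence is an illustrative assumption."""
    return sum(p * math.log(p / prior[s])
               for s, p in posterior.items() if p > 0)

# Hypothetical toy numbers: two candidate structures for a locally
# coherent word, out of context (prior) and in context (posterior).
prior     = {"main-clause": 0.7, "reduced-relative": 0.3}
posterior = {"main-clause": 0.1, "reduced-relative": 0.9}

print(belief_update_size(prior, posterior))  # larger -> predicted to be harder
```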

2009-05-19

Use of orthographic and phonological codes in early word recognition and short-term memory by skilled and less skilled deaf readers of French

Nathalie Bélanger

+ more

A small proportion of profoundly deaf individuals attain expert reading skills, and it is important to understand why they become skilled readers while other deaf readers do not. Despite the hypothesis that good phonological processing skills during reading are associated with good reading skills in deaf readers (Perfetti & Sandak, 2000), research has not yet provided clear answers as to whether this is the case. We investigated skilled and less skilled severely to profoundly deaf adult readers’ use of phonological codes during French word recognition and recall. A group of skilled hearing readers was also included as a means of comparison to existing literature. Given the close mapping of orthographic and phonological information in alphabetical writing systems, the unique contribution of orthographic codes was also investigated. Bearing in mind the particular focus on phonological processing in deaf (and hearing) readers and the potential implications for reading education for deaf children, it appears crucial to ensure that effects of orthographic and phonological information during word processing and recall are disentangled in this population. Results from a masked primed lexical decision task, where orthographic and phonological overlap between primes and targets was manipulated, show no difference between skilled hearing, skilled deaf and less skilled deaf readers in the way they activate orthographic and phonological information during early word recognition. The same groups of participants also performed a serial recall task where words were orthographically and phonologically similar (pierre, lierre, erre, etc.), orthographically dissimilar and phonologically similar (chair, clerc, bière, etc.), or orthographically and phonologically unrelated (ventre, manchot, oreille, etc.). Skilled hearing readers showed a robust phonological similarity effect, but neither group of deaf readers (skilled or less skilled) did. All participants showed an advantage in recalling words that were orthographically and phonologically similar over words that were orthographically dissimilar and phonologically similar, suggesting that orthographic codes are also used to maintain words in short-term memory. The results of these two studies will be discussed and contrasted, and presented in the context of reading instruction for deaf children.

2009-05-12

How the conceptual system gets started and why it might interest image-schema theorists

Jean Mandler

+ more

A good case can be made that the foundations of the conceptual system rest on a small number of spatial primitives. Object concepts (animal, vehicle), relational concepts (in, out), and abstract concepts (cause, goal) all begin on a purely spatial basis and can easily be represented by spatial image-schemas. Only later in development do concepts accrue bodily associations, such as feelings of force and motor information. Bodily feelings enrich concepts but their representation remains crude and less structured than spatial representation. I suggest that simulations used to understand events rely primarily on spatial image-schemas and do not necessarily include bodily feelings.

2009-05-05

Enemies and friends in the neighborhood: cascaded activation of word meaning and the role of phonology

Diane Pecher (with René Zeelenberg)
Erasmus University Rotterdam

+ more

Many models of word recognition predict that orthographic neighbors (e.g., broom) of target words (e.g., bloom) will be activated during word processing. Cascaded models predict that semantic features of neighbors get activated before the target has been uniquely identified. This prediction is supported by the semantic congruency effect, the finding that neighbors that require the same response (e.g., living thing) facilitate semantic decisions whereas neighbors that require the opposite response (e.g., non-living thing) interfere with semantic decisions. In a recent study we investigated the role of phonology by manipulating whether orthographic neighbors had consistent (broom) or inconsistent phonology (blood). Congruency effects in animacy decision were larger when consistent neighbors had been primed than when inconsistent neighbors had been primed. In addition, semantic congruency effects were larger for targets with phonologically consistent neighbors than for targets with phonologically inconsistent neighbors. These results are in line with models that assume an important role for phonology even in written word recognition (e.g., Van Orden, 1987).

2009-04-28

Do lexical-syntactic selection mechanisms have rhythm?
Yet another "that" experiment

Vic Ferreira (with Katie Doyle and Tom Christensen)

+ more

Speech tends to be rhythmic, alternating strong and weak syllables. To promote alternation, speakers (of English, at least) change *how* they say things ("thirTEEN," but "THIRteen MEN"), but will they change *what* they say? Perhaps not. Words and structures may be selected only to convey speakers' messages. And, phonological information may become available too late to influence lexical and syntactic selection. In two experiments, speakers produced sentences like, "NATE mainTAINED (that) ERin DAmaged EVery CAR in SIGHT" or "NATE mainTAINED (that) irENE deSTROYED the BUMper ON the TRUCK." The optional "that," a weak syllable, would promote stress alternation if mentioned in the first sentence and omitted in the second. Speakers in Experiment 1 produced sentences from memory, and said "that" about 6% more in the first type of sentence than the second. But, memory involves comprehension and production, and evidence suggests that comprehension more than production prefers alternating stress. So speakers in Experiment 2 produced sentences by combining simple sentences into complex ones; now, no difference is observed. This suggests that in extemporaneous production, speakers do not choose words and structures to promote alternating stress.

2009-04-21

The hand that rocks the cradle rules the brain

Tom Bever

+ more

Fifty years of behavioral and clinical research supports the hypothesis that right-handers with familial left-handedness (RHFLH) have a distinct pattern of language behavior, which may reflect differences in the neurological organization of the lexicon. RHFLH people organize their language processing with relative emphasis on individual words, while RHFRH people are more reliant on syntactic patterns. Recent fMRI studies support the idea that RHFLH people may access words more easily than RHFRH people because their lexicon is more bilaterally represented: syntactic tasks elicit left hemisphere activation in relevant areas for all subjects; corresponding lexical/semantic tasks elicit left hemisphere activation in RHFRH people, but bilateral activation in RHFLH people. This suggests that, while syntax is normally represented in the left hemisphere, lexical information and access can be more widespread in the brain. This result has implications for clinical work and for the interpretation of many clinical and neurolinguistic studies that fail to differentiate subjects’ familial handedness. It is also suggestive of a language-specific neurological basis for syntax, amidst a more general basis for the lexicon.

2009-04-14

Are the Literacy Challenges of Spanish-English Bilinguals Limited to Reading?

Darin Woolpert

+ more

Spanish-English bilinguals (SEBs) represent 9% of students in U.S. schools. In California alone, we have 1.3 million SEB students - more than a third of that total. These children have well-established academic struggles, with literacy being a particular concern (Grigg, Donahue, & Dion, 2005; Lee, Grigg, & Donahue, 2007; Restrepo & Gray, 2007). These reading problems persist throughout their academic careers, with SEB children lagging behind their monolingual English (ME) peers in pre-literacy skills such as phonological awareness (FACES 2000, 2003), and those that graduate high school do so reading, on average, at the 8th grade level (Donahue, Voekl, Campbell, & Mazzeo, 1999). A great deal of research has focused on early emerging literacy skills (e.g., Dickinson, McCabe, Clark-Chiarelli, & Wolf, 2004; Rolla San Francisco, Mo, Carlo, August, & Snow, 2006), such as phonological decoding (word reading) and encoding (spelling), as this is a crucial first step towards literacy acquisition for ME children (Bialystok, Luk, & Kwan, 2005; Gough & Tunmer, 1986). Recent research, however, has suggested that later-emerging skills, such as morphosyntactic awareness and reading comprehension, are the most problematic for SEB children (August & Shanahan, 2006), leaving questions about the origins of these deficits and the best way to address them. Children with a first language of Spanish may struggle to learn to decode in English due to typological differences such as the opacity of English orthography (seen in Bernard Shaw’s suggestion of "ghoti" as an alternate spelling for "fish"). Alternatively, SEB children may be struggling to build their literacy skills on a shaky foundation of spoken English, leading to problems as they get older.

To evaluate these competing claims, we gave standardized tests of spoken (sentence repetition and vocabulary) and written language (spelling and reading) to 53 SEB students from kindergarten to second grade, as well as a spoken and written narrative task. The children performed at age level with regard to spelling and reading (i.e., early-emerging literacy). The children tested below the normal range on the sentence repetition and vocabulary tasks, however. On the narrative task, the children struggled with verb morphology in both the spoken and written domains, with no significant differences in error rate between the first and second graders.

These findings support those reported by August and Shanahan, and suggest that SEB children do not have problems with word decoding, but rather struggle to acquire literacy due to a lack of proficiency with English overall. This has implications for interventions developed for ME children with reading problems, and for the issue of properly diagnosing language impairment in ME children given the language profile of SEB children (e.g., Paradis, Rice, Crago, & Marquis, 2008). Directions for future research will be discussed.

2009-04-07

"Point to where the frog is pilking the rabbit”: Investigating how children learn the meaning of sentences

Caroline Rowland University of Liverpool, UK

+ more

A unique but universal quality of language is the fact that the structure (or form) of a sentence affects its meaning. To master a language, learners must discover how sentence structure conveys meaning - the form-function mapping problem. This task is complicated by the fact that different languages require speakers to encode different aspects of the event; for example, in Spanish and German (but not in English) a speaker can change the order of the words without necessarily changing the meaning of the sentence, in German (but not English or Spanish), nouns must be marked for case, and in Spanish (but not English or German), speakers must use a grammatical patient marker if the object affected is animate.

Despite the apparent complexity of the task, recent research suggests that certain aspects of form-function mapping are learned very early on. For example, even before two years of age, English children can use word order to identify who is doing what to whom, detecting that transitives with novel verbs such as "the rabbit is glorping the duck" must refer to a cartoon showing a rabbit acting on a duck, not one in which a duck acts on a rabbit. However, it is unclear whether early ability is limited to frequently heard, simple structures like the transitive, or extends to other, more complex ones. This has implications for the amount of knowledge we attribute to young children and how we characterise the acquisition process. In addition, previous work often focuses only on showing that young children can understand form-function mappings, without investigating what it is that may underlie their performance (e.g. what might be the nature of any innate biases, what cues to meaning are most salient in the language children hear).

In this talk, I will present a number of studies using a new forced-choice pointing paradigm to investigate 3- and 4-year-old English and Welsh children's comprehension of two structures that are less frequent and more complex than the transitive: the prepositional and double object dative. The results demonstrate that English and Welsh children have some verb-general knowledge of how dative syntax encodes meaning soon after their third birthday, but that this is not always enough for successful comprehension. Cross- and within-language differences suggest that the correct interpretation of datives relies on the presence of a number of surface cues, and that the children's ability to use a cue depends on its frequency and salience in child-directed speech. Implications for theories of grammar and verb learning are discussed.

2009-03-17

Resolving Conflicting Information from First-Mention Biases and Discourse Event Structure in Ambiguous Pronoun Interpretation in a Short Story Paradigm

Anna Holt and Gedeon Deak

+ more

Making anaphoric judgments in a discourse context poses several novel challenges when compared with simple, intra-sentential anaphoric resolution. Adults use the lexical features of a pronoun (e.g. gender, animacy, and number) as the most reliable source of information for disambiguation. However, when the lexical features of a pronoun are underspecified, adults use conflicting strategies to determine its referent. Adults have a well-known preference for treating the first of two or more entities in a sentence—often the grammatical subject and the continuing discourse topic—as the most salient one, and hence the preferred pronoun referent (Arnold, Eisenband, Brown-Schmidt, & Trueswell, 2000). However, recent work (Rohde, Kehler and Elman, 2007) suggests adults also use strategies that take into account event structure and discourse cohesion when determining the referent of a pronoun in an inter-sentential story completion paradigm. For instance, participants prefer interpretations consistent with ongoing action (e.g. adults spontaneously produce more goal continuations for pronouns following sentences with a perfective verb than with an imperfective verb). We tested how adults resolve conflicting cues to inter-sentential pronoun interpretation, including the first-mentioned entity, the most frequently named entity, and the entity predicted by verb aspect and verb semantics. We created a set of five-sentence short stories involving two actors. The two actors participate in a short exchange using a transfer-of-motion verb. An ambiguous pronoun then undergoes an intransitive action, and participants are asked to choose which actor is the referent of the pronoun. Throughout these stories, we vary 1) whether or not the current topic is also the initial subject of the first sentence (presence or absence of a topic switch), 2) whether or not the last-mentioned actor is also the initial subject of the first sentence, and 3) whether the event structure predicted by the intransitive verb suggests a goal continuation or a source continuation from the actor in the transfer-of-motion sentence. We collected responses as the reaction time to choose the appropriate actor following story presentation and the percentage of choices of the initial story topic. Future work will additionally collect eye-tracking data, as pronouns in unambiguous contexts are typically resolved within 200 ms (Arnold, Eisenband, Brown-Schmidt, & Trueswell, 2000).

No ERP evidence for automatic first-pass parsing: Pure word category violations do not elicit early negativity

Lisa Rosenfelt, Christopher Barkley, Kimberly K. Belvin, Chia-lin Lee, Kara Federmeier, Robert Kluender, and Marta Kutas

+ more

Certain neurocognitive processing models [1,2] map early left anterior negativity (eLAN) onto automatic first-stage parsing—because it is elicited by purported grammatical category violations, by hypothesis interfering with initial syntactic assignments—and late positivity (P600) onto processes of reanalysis occurring later in time. Crucially, however, eLAN (followed by a P600) has been reliably elicited only by words following missing nominal heads, as in Max's __ OF [3] and im __ BESUCHT (“visited in the __”) [4]. In the latter, most common paradigm, the violation occurs when a verb replaces the expected noun. Thus noun/verb violations that do not elicit early negativity [5,6] and grammatical verb gapping that does [7,8] become relevant to the discussion.

We compared ERP responses to word category violations with (a,b: “ungapped”) and without (c: “gapped”) phrasal heads in stories that required reading for comprehension rather than monitoring for syntactic well-formedness.

[...]

In sum, violation of the expected grammatical category of an incoming word is not a sufficient (i) [5,6] condition for eliciting early negativity; it seems to be reliably elicited only in paradigms that gap phrasal heads (ii) [3,4,7,8,9]. If early negativity is sensitive to gapping rather than to grammatical category per se, it cannot be the index of an automatic first-pass parse assigning preliminary syntactic structure. Without a reliable ERP index of modular first-pass parsing, a crucial piece of neurocognitive evidence in support of serial parsing models is called into question.

2009-03-10

Two is not better than one: The consequences of translation ambiguity for learning and processing

Natasha Tokowicz

+ more

Many words have more than one translation across languages. This so-called “translation ambiguity” arises mainly from existing ambiguities within a language (e.g., near-synonymy and lexical ambiguity) and poses a number of potential problems for language learning and on-line processing. My colleagues and I have explored these issues in a series of experiments that have tested individuals at different proficiency levels and that have used different pairs of languages. In this talk, I will summarize this research and discuss the implications of translation ambiguity for second language learning and processing, and the potential for this research to inform models of language processing more generally.

2009-03-03

Talker information facilitates word recognition in real time

Sarah Creel

+ more

Recent interest in talker identity as a factor in language interpretation (e.g. Van Berkum et al., 2008) raises questions about how listeners store and utilize talker-specific information. Talker identification might conceivably involve processes external or orthogonal to language comprehension, and thus would only affect interpretation on a relatively slow time scale (lengthy discourse or a sentence, or after word offset). Alternatively, talker identity might be readily available in the same stored representations used for word identification (Goldinger, 1998). If the latter account is correct, then listeners should be able to use talker variation not only in long-time-scale but also in short-time-scale language comprehension (single words). In the current study, we found that talker variation affects word recognition prior to the point at which speech-sound information is useful. This supports the notion that listeners represent phonemic and nonphonemic variability in conjunction, though it remains possible that separate lexical and episodic information combine to determine these effects. We are currently exploring the acoustic specificity of these representations, and the role of lengthier acoustic context in normalization.

2009-02-24

How Our Hands Help Us Think About Space

Susan Goldin-Meadow

+ more

Language does not lend itself to talking about space.  Space is continuous, language is discrete.  As a result, there are gaps in our talk about space.  Because gesture can capture continuous information, it has the potential to fill in those gaps.  And, indeed, when people talk about space, they gesture.  These gestures often convey information not found in the words they accompany, and thus provide a unique window onto spatial knowledge.  But gestures do not only reflect a speaker’s understanding of space, they also have the potential to play a role in changing that understanding and thus play a role in learning.

2009-02-17

Rhythm, Timing and the Timing of Rhythm

Amalia Arvaniti

+ more

The notion that languages can be rhythmically classified as stress- or syllable-timed has gained increased popularity since the introduction of various metrics -- such as the PVIs of Grabe & Low (2002) or the %V-ΔC of Ramus et al. (1999) -- that seek to quantify the durational variability of segments and use this quantification as a means of rhythmically classifying languages. Since rhythm metrics have been used extensively to support research on language acquisition and speech processing that relies on the idea of languages belonging to one of two rhythmic types, it is important to critically examine both the empirical basis and the theoretical assumptions behind the metrics.
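
For reference, the metrics named above have standard published formulations: the normalized Pairwise Variability Index (nPVI) of Grabe & Low (2002) and the %V and ΔC measures of Ramus et al. (1999). The sketch below implements those formulas over made-up interval durations; the numbers are purely illustrative.

```python
import statistics

def npvi(durations):
    """Normalized Pairwise Variability Index (Grabe & Low, 2002):
    mean absolute difference between successive interval durations,
    normalized by their local mean and scaled by 100."""
    return 100 * statistics.mean(
        abs(a - b) / ((a + b) / 2) for a, b in zip(durations, durations[1:])
    )

def percent_v_delta_c(vocalic, consonantal):
    """%V and Delta-C (Ramus et al., 1999): the proportion of utterance
    duration that is vocalic, and the standard deviation of consonantal
    interval durations."""
    total = sum(vocalic) + sum(consonantal)
    return 100 * sum(vocalic) / total, statistics.stdev(consonantal)

# Hypothetical interval durations in milliseconds (illustration only)
print(npvi([120, 80, 150, 90]))
print(percent_v_delta_c(vocalic=[120, 90, 110], consonantal=[60, 140, 70]))
```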

I show that the success of metrics at rhythmic classification is much more modest than originally anticipated. I argue that this lack of success has its origins in the misinterpretation and simplification of Dauer's original ideas, on which metrics are said to be based, and in particular on the confounding of segmental timing with rhythm. I further argue that these problems cannot be corrected by "improving" on metrics, due to (a) the lack of independent measures associated with the notion of metrics and rhythmic classification in general, and (b) the psychological implausibility of the notion of syllable-timing in particular.

I propose that in order to understand rhythm, it is necessary to decouple the quantification of timing from the study of rhythmic structure, and return to Dauer's original conception of a rhythmic continuum ranging from more to less stress-based. A conception of rhythm as the product of prominence and patterning is psychologically plausible, and does not rely on a questionable and ultimately unsuccessful division between languages or the measuring of timing relations. A proposal along these lines and data from a production experiment are presented.

2009-01-27

Brain Indices of Syntactic Dependencies in Japanese
Evidence for Language-Universal and Language-Specific Aspects of Neurocognitive Processing

Mieko Ueno

+ more

One of the more challenging questions in sentence processing has been whether all parsing routines are universal, or whether some can or need be language-specific. In this talk, I present a series of event-related brain potential (ERP) studies of syntactic dependencies in Japanese, including scrambling, relative clauses, and wh-questions, to shed light on this question.

Previous ERP studies of wh-movement languages such as English and German (e.g., Kluender & Kutas, 1993; King & Kutas, 1995; Fiebach et al., 2002; Felser et al., 2003; Phillips et al., 2005) report left-lateralized anterior negativity (LAN) elicited between the displaced wh-fillers and their gaps. LAN is thought to index increased verbal working memory load due to a dependency between a wh-filler and its gap. In addition, late positivity has been reported at the gap position, which is said to index the syntactic integration cost of the displaced filler (Kaan et al., 2000; Fiebach et al., 2002; Felser et al., 2003; Phillips et al., 2005).

Unlike English and German, Japanese is an SOV wh-in-situ language that allows scrambling. In addition, Japanese relative clauses are prenominal rather than postnominal, and these typological differences affect the nature of the processing demands for filler-gap dependencies in Japanese. However, despite these striking differences, the way the brain processes syntactic dependencies in Japanese looks remarkably familiar. Only known ERP components are elicited in these contexts, pointing to the universality of parsing; yet they pattern and combine in subtly different ways, creating a profile that, in the aggregate, accommodates and does justice to the language-specific features of Japanese as well.

2009-01-20

Integrating Conceptual Knowledge Within and Across Representational Modalities

Chris McNorgan

+ more

Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, communication within- and between-modality is accomplished using either direct connectivity, or a central semantic hub. In deep models, modalities are connected by cascading integration sites with successively wider receptive fields. Deep models predict a within-modal advantage for feature inference, but a cross-modal advantage for pattern completion, whereas shallow models predict no difference for either task. The pattern of decision latencies across a series of complementary behavioural studies using both feature inference and pattern completion is consistent with a deep integration hierarchy.

2009-01-13

Wendy Sandler

The Kernels of Phonology in a New Sign Language

+ more

The property of duality of patterning – the existence of two levels of structure, a meaningful level of words and sentences alongside a meaningless level of sounds – has been characterized as a basic design feature of human language (Hockett 1960). Some have also argued that phonology must have existed prior to hierarchical syntactic structure in the evolution of language (Pinker & Jackendoff 2005). Sign languages were admitted to the 'bona fide language club' only after Stokoe (1960) demonstrated that they do exhibit duality. But is it possible for a conventionalized language to exist without a fully developed phonological system – without duality?

Using evidence from a sign language that has emerged over the past 75 years in a small, insular community, I will show that phonology cannot be taken for granted. The Al-Sayyid Bedouins have a conventionalized language with certain syntactic and morphological regularities (Sandler et al 2005, Aronoff et al 2008), but the language is apparently still in the process of developing a level of structure with discrete meaningless units that behave systematically. In other words, we don't find evidence for a full-blown phonological system in this language.

Can a language go on like this?  Data from children and from families with several deaf people help to pinpoint emerging regularities and complexity at the level of meaningless formational elements in ABSL.  While phonology in language cannot be taken for granted, then, its existence in all older languages, spoken and signed, suggests that it is inevitable. Rather than assume that phonology is somehow 'given' or hard-wired, this work leads us to ask, Why and how does it arise?

2008-12-02

Dynamics and Embodiment in Language Comprehension

Michael Spivey

+ more

There are several findings that suggest bi-directional influences between language and vision. From visual search to sentence comprehension, I will discuss experimental results suggesting that sometimes language can tell vision what to do, and sometimes vision can tell language what to do. Along with many other studies, these findings of fluid interaction point toward an account of perceptual/cognitive processing that can accommodate linguistic and visual processes in a common format of representation: a "continuity of mind", if you will.

2008-11-25

Oral and Gestural Motor Abilities and Early Language - Talking with the Mouth

Katie Alcock

+ more

New evidence is accumulating to link both the ontogenesis and the phylogenesis of language to motor control (Arbib, 2005; Hill, 2001). This is in addition to much long-standing evidence linking communicative gesture to early language and to delays in early language (Bates et al., 1979; Thal et al., 1997).

However, most children learning language are learning a spoken language, and some children who have language-learning difficulties also have difficulties with oral motor control (Dewey et al., 1998; Gernsbacher 2008).  We set out to compare the relationships between manual gesture, early language abilities, and oral motor control, controlling for overall cognitive ability, in typically developing children aged 21 months, followed up at age 36 months (N=58).

At 21 months relationships were found between vocabulary production, production of complex language, and vocabulary comprehension (measured using a British version of the MacArthur-Bates CDI, Hamilton et al. 2000), and oral motor abilities, with an additional relationship between vocabulary comprehension and memory for manual gesture sequences. After controlling for cognitive ability and SES, however, only oral motor control was related to language production, and only cognitive ability was related to language comprehension.

At 36 months concurrent relationships were found between oral motor control, imitation of meaningless gestures, and expressive and receptive language (as measured on the Preschool Language Scale).  When performance at 21 months was controlled for, 36 month expressive language was most strongly related to oral motor abilities at 36 months.  Receptive language abilities at 36 months were predicted by 21 month vocabulary comprehension, and in addition were related to meaningless manual gesture imitation ability at 36 months.

We concluded that, because both the tests of nonverbal oral motor abilities and the language assessments involve an articulatory component, these assessments are likely measuring closely overlapping abilities.  Children who are learning to speak with their mouths seem either to need good oral motor skills, or to develop these as a result of articulatory practice.

On the other hand, imitation of meaningless gesture is likely to draw heavily on children's visuo-spatial abilities and/or executive function abilities, and hence be related to language comprehension abilities.  I will discuss in addition the potential use of early manual and oral motor assessments in predicting later language delay.

2008-11-18

Successful Bilingualism: What the Spanish-Minors Reveal About Language Production

Tamar Gollan

+ more

Our ability to speak is one of the things that arguably makes us most different from animals. Extending this thought into a continuum would seem to place people who speak multiple languages further from monkeys than people who speak just one, and indeed many people go to great lengths to become proficient in more than one language. But high levels of proficiency in more than one language lead to some subtle but significant processing costs for dominant-language fluency. In previous talks I have told you about how early bilinguals name pictures more slowly, have reduced verbal fluency, and have more naming failures (tip-of-the-tongue states) than monolinguals. It might seem that bilingual disadvantages for language tasks should primarily be attributed to interference between languages; however, I have argued that early bilingual disadvantages are (in most cases) best explained by assuming reduced frequency-of-use relative to monolinguals. In this talk I will tell you about the effects of proficient late bilingualism on dominant-language fluency, TOT rates, and picture naming times. These data reveal a role for between-language interference in bilingual language production, and provide clues as to when competition during lexical selection is fiercest. Studies of late second-language acquisition typically focus on proficiency in the second language. In these experiments we take a different approach by focusing on how learning a second language affects your first language. Our data reveal that early and late bilingualism have some similar but also some different consequences for dominant-language production. These contrasts provide clues as to what leads to successful bilingualism while also revealing the mechanisms fundamental to proficient language use in speakers of all types (mono- and bilingual).

2008-11-04

Gradiency in Syntactic Garden-Paths Revealed by Continuous Motor Output

Thomas Farmer

+ more

On-line syntactic processes are highly dynamic and interact with other perceptual and representational systems in real time. In this talk, I present a series of studies that utilize the "visual-world" paradigm to assess how scene-based referential context impacts the resolution of structural ambiguities. In the paradigm employed here, participants click and move objects around a visual display in response to instructions that contain temporary syntactic ambiguities. To complement previous eye-tracking work, our most recent work presents continuous data from the action phase of each trial. Nonlinear trajectories were recorded from computer-mouse movements, providing a dynamic and continuous dependent measure that can reveal subtle gradations in the resolution of temporarily ambiguous "garden-path" sentences. Analysis of movement trajectories revealed that when a scene-based context supports the incorrect analysis of the ambiguity, movement trajectories curve subtly toward a location on the screen corresponding to the incorrect interpretation, before terminating at the ultimately correct display-location. When the visual context supports the ultimately correct interpretation, however, no commensurate curvature is observed. These effects, evident in the dynamic data produced within each trial, fail to support a characterization of syntactic structure interpretation as an all-or-nothing process in which a discrete re-analysis either will or will not be required. Instead, they highlight the gradiency in the degree to which correct and incorrect syntactic structures are pursued over time, thus providing support for competition-based accounts of constraint-based sentence processing.
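
To make the dependent measure concrete, one common way to quantify this kind of curvature is the maximum perpendicular deviation of the movement path from the straight line between its start and end points. The abstract does not say which curvature statistic was analyzed, so the measure and the coordinates below are purely illustrative.

```python
import numpy as np

def max_deviation(xs, ys):
    """Maximum signed perpendicular deviation of a trajectory from the
    straight line joining its start and end points. Larger deviations
    toward the competitor's side would indicate stronger attraction to
    the incorrect interpretation. (Illustrative measure only.)"""
    p = np.column_stack([xs, ys]).astype(float)
    start, end = p[0], p[-1]
    dx, dy = end - start
    # Signed perpendicular distance of every sample from the start-end line
    dists = (dx * (p[:, 1] - start[1]) - dy * (p[:, 0] - start[0])) / np.hypot(dx, dy)
    return dists[np.argmax(np.abs(dists))]

# Hypothetical mouse samples that bow toward a competitor before
# terminating at the correct display location
xs = [0, -6, -14, -10, -2, 8, 20]
ys = [0, 12, 30, 48, 64, 82, 100]
print(max_deviation(xs, ys))
```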

2008-10-28

Child-Driven Language Input:  Insights from Young Signing Children

Amy Lieberman

+ more

Successful communication in sign language requires individuals to maintain visual attention, or eye gaze, with their interlocutors. For deaf children, visual attention is the source of both linguistic and non-linguistic input, thus the ability to obtain and maintain attention becomes a crucial factor in language development. This study explored initiations and turn-taking behaviors among deaf children and deaf and hearing adults in a classroom setting in which ASL was the primary mode of communication. Analysis of peer interactions revealed that children used objects, signs, and conventional attention-getters (e.g. waving or tapping) to obtain attention. Analysis of individual differences showed a high correlation between age, language proficiency, and the number and type of initiations attempted. By the age of two, children exhibited the ability to actively manage their own communication using complex attention-getting and turn-taking behaviors. These findings suggest that early and consistent exposure to sign language enables children to develop the meta-linguistic skills necessary for interaction in a visual language.

2008-10-21

Do Phonological Awareness and Coding Predict Reading Skill in Deaf Readers? A Meta-Analysis

Alex Del Giudice

+ more

Phonological awareness, or coding, skills are hypothesized to play a key role in reading development for readers who hear, although the direction and size of these effects are controversial. Using a meta-analysis, we investigated the relation between phonological awareness/coding skills and reading development in readers who are deaf. From an initial set of 230 relevant publications addressing this question, we found 25 studies that measured the relationship directly and experimentally. Our analyses revealed that the average relationship of phonological awareness/coding to reading level in readers who are deaf is low to medium in size, though variability is high. Variables such as experimental task, reading measure, and reader characteristics can explain the variation across study results. The small and unreliable relation between phonological awareness/coding and reading in the deaf population suggests that it plays a minor role in their reading achievement.
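
As a rough illustration of how study-level correlations of this sort are typically pooled in a meta-analysis, the sketch below computes a fixed-effect weighted mean correlation via Fisher's z transform. The abstract does not specify which meta-analytic model was used, and the (r, n) pairs are hypothetical.

```python
import math

def pooled_correlation(studies):
    """Fixed-effect pooled correlation: transform each r to Fisher's z,
    weight by n - 3 (the inverse of z's sampling variance), average,
    and transform back. Illustrative sketch, not the study's method."""
    num = den = 0.0
    for r, n in studies:
        z, w = math.atanh(r), n - 3
        num += w * z
        den += w
    return math.tanh(num / den)

# Hypothetical (correlation, sample size) pairs for three studies
print(pooled_correlation([(0.35, 40), (0.10, 25), (0.28, 60)]))
```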

2008-10-14

The influence of plausibility on eye movements during reading

Keith Rayner

+ more

It is a well-known and highly robust finding that word frequency and word predictability have strong influences on how long readers fixate on a word and the probability of skipping the word.  Recently, other variables, like age-of-acquisition and plausibility, have also been demonstrated to influence eye movements. In this talk, I will review our initial investigation of plausibility effects and also discuss more recent studies we have completed dealing with how plausibility influences eye movements. Parallels will be drawn to research on word frequency effects and also garden path effects in sentence parsing.   Implications of the research findings for models of eye movement control and on-line sentence processing will be discussed.

2008-10-07

Speaking vs. signing:  How biology affects the neurocognitive processes for language production

Karen Emmorey
Professor, Speech, Language, and Hearing Sciences, San Diego State University

+ more

Sign languages provide crucial insights into what aspects of language processing are affected by the perceptual systems engaged for comprehension (vision vs. audition) and by the motor systems used for production (the hands vs. the vocal tract).  In this talk, I will discuss whether and how the different biological properties of sign and speech impact the neural systems that support language production.  In addition, I will present a set of experiments that explore the distinct properties of the perception-production interface for signing compared to speaking.  These experiments explore whether visual feedback plays the same role as auditory feedback during language production.

2008-06-03

How the perceptual system adjusts to speaker variability

Tanya Kraljic

+ more

Perceptual theories must explain how perceivers extract meaningful information from a continuously variable physical signal. In the case of speech, the puzzle is that little reliable acoustic invariance seems to exist. In the experiments I will present, I tested the hypothesis that speech-perception processes recover invariants not about the signal, but rather about the source that produced the signal. Findings from two manipulations suggest that the system learns those properties of speech that result from idiosyncratic characteristics of the speaker; the same properties are not learned when they can be attributed to incidental factors. The question then becomes: How might the system distinguish these properties? The experiments suggest that in the absence of other information about the speaker, the system relies on episodic order: Those properties present during early experience are represented. This "first-impressions" bias can be overridden, however, when additional information provided to the system suggests that the variation is an incidental consequence of a temporary state (a pen in the speaker's mouth), rather than characteristic of the speaker.

2008-05-27

Children's interpretation of third person present -s as a cue to tense

Tim Beyer

+ more

While comprehension generally precedes production in development, this may not be true for 3rd person present –s. Studies have shown that even 5- and 6-year-old children do not yet understand all the meanings encoded by –s (de Villiers & Johnson, 2007; Johnson, de Villiers, & Seymour, 2005; Keeney & Wolfe, 1972). Here, we examine whether 6- and 7-year-old Standard American English-speaking children comprehend the temporal information encoded by –s, as compared to lexical items and past tense –ed. Experiment 1 assessed off-line performance and found that all children successfully interpreted the lexical items and –ed, but only the 7-year-olds successfully interpreted –s. Eye-tracking measures in Experiment 2 confirmed these results and revealed that the 6-year-olds are also sensitive to –s as a cue to tense, but it may not be a strong cue at this age. We argue that the relatively late acquisition of –s is due to characteristics specific to –s that make its meaning less transparent than that of other tense morphemes, such as –ed.

2008-05-20

Gesture as input in language acquisition

Whitney Goodrich
University of California, Berkeley

+ more

In every culture of the world, whenever you hear people speaking, you see people moving their hands. These co-speech gestures are not random, but contain meaningful information. My research explores the extent to which listeners are sensitive to the information conveyed in gesture, and whether this can be a source of input for young children acquiring language. I will be discussing research demonstrating that both children and adults rely on gesture to inform their interpretation of novel verbs and ambiguous pronouns, and discuss how gesture may help children learn to understand anaphora.

2008-05-13

The Deictic Urge

Kensy Cooperrider

+ more

Pointing is a common accompaniment to speech. Yet researchers in the cognitive sciences have been much more interested in the pointing behaviors of orangutans and infants than in how pointing is used in fully adult, fully human discourse. As a result, we know very little about how, when, and why adult speakers point in face-to-face interaction. In this talk I will: 1) discuss the results of a recent armchair ethnographic exercise in which we analyzed 45 pointing gestures from a 1990 interview between Michael Jordan and Arsenio Hall; 2) introduce the idea that pointing reflects a deictic urge-- that is, a human urge to anchor the entities of discourse in real or conceptual space as we speak; and 3) describe a series of observational studies I am conducting to investigate the how, when, and why of pointing gestures. I argue that, in addition to being a phenomenon of central interest to gesture studies, pointing provides a crucial window into the role of spatial thinking for speaking.

2008-05-06

The Spatiotemporal Neural Dynamics of Word Knowledge in Infants

Katie Travis

+ more

The learning of words is one of the first and most important tasks confronting the young child. By 12 months of age, children are already capable of learning words and have also started speaking. Decades of behavioral research have provided important insights into how infants learn their first words. Yet, as important as this process is, we know virtually nothing about the neural mechanisms that make early word learning possible. Thus, in order to better understand how the infant mind acquires language, it will be important to determine when and where word learning occurs in the infant brain.

Taking a neurobiological perspective of language development, I propose to study neural activity related to language processes in infants by combining non-invasive brain imaging technologies such as magnetoencephalography (MEG) and structural magnetic resonance imaging (MRI). In this way, I will be able to obtain both functional/temporal (MEG) and anatomical (MRI) information about when and where activity related to language processing occurs in the developing brain. Combining these techniques will also help me to overcome some of the limitations of the individual technologies and the inherent difficulties of imaging infants. For this talk, I will be discussing how I have adapted MEG and MRI techniques to study neural processes related to early language development in young infants. Specifically, I will be describing preliminary results from an initial study aimed at investigating the spatial and temporal dynamics of semantic knowledge in infants ages 12-15 months.

2008-04-29

Verb Argument Structure In The Language Of Latino Preschoolers With And Without Language Impairment: Preliminary Findings

Gabriela Simon-Cereijido

+ more

Previous research has indicated that English-speaking and Spanish-speaking children with language impairment (LI) have difficulties with verbs that take a greater number of arguments in spontaneous language. The purpose of my project is to evaluate the role of verb argument structure (VAS) in the language of Latino Spanish-speaking preschoolers with and without LI who are English Language Learners. The specific goals of this study are to examine: 1) whether children with LI have more omissions of verbs and arguments than age- and language-matched controls in a Spanish picture description task, and 2) whether children with LI are less accurate with ditransitive verbs than with transitive and intransitive verbs. If children with LI omit more verbs and arguments than language-matched controls, this may indicate that VAS deficits are specific to LI and not developmental. In addition, if children with LI have more errors with ditransitive verbs than with the other verbs, this may suggest that processing capacity limitations hinder their production of predicates with more arguments due to increased processing load. Alternatively, if their omission rates are not more pronounced with predicates with more arguments, this may point to limitations in the overall verb system. Ultimately, this project's findings will inform clinical issues related to assessment and intervention of Latino ELLs, a growing segment of the pediatric population.

2008-04-22

Surprisal as optimal processing

Nathaniel Smith

+ more

I present a theoretical and empirical investigation of the precise role that probability in context, P(word|context), plays in reading. It is well known that words which are more predictable are also read more quickly (e.g. Ehrlich and Rayner, 1981). Not yet known, however, is the precise functional form of this relation; authors have suggested that probability's contribution is logarithmic, linear, or even reciprocal, while empirical work has made only factorial comparisons that provide limited insight into curve shape. There is also as yet no consensus on why these effects occur, or take whatever form they do. We address these issues by (a) presenting a simple theoretical model of reading time which explains its sensitivity to probability as arising within an optimal processing framework, and which strongly predicts a logarithmic relation between probability and time; and, (b) giving supporting evidence from an empirical study of reading times.
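As a rough formal sketch (the notation is mine, not taken from the abstract), the logarithmic hypothesis relates the reading time on a word to its surprisal in context,

\[
\mathrm{RT}(w) \;=\; \alpha + \beta \cdot \bigl(-\log P(w \mid \mathrm{context})\bigr),
\]

where \(\alpha\) and \(\beta\) are free parameters; the competing proposals mentioned above would instead place \(P(w \mid \mathrm{context})\) or its reciprocal on the right-hand side.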

2008-04-15

Toward a Discourse Model of Ellipsis

Laura Kertz

+ more

I present results from a series of magnitude estimation experiments which demonstrate the effect of information structure on ellipsis acceptability.  I show how these results reconcile apparently contradictory claims in the literature, where information structure has previously been confounded with other levels of representation, including syntax and discourse coherence. I also discuss the implications of these findings for recent processing-based models which link ellipsis acceptability to the specific task of antecedent reconstruction, and compare the predictions of those models to a more general discourse processing approach.

2008-04-08

Modeling uncertainty about the input in online sentence comprehension

Roger Levy

+ more

Nearly every aspect of language processing is evidential---that is, it requires informed yet uncertain judgment on the part of the processor. To the extent that language processing is probabilistic, this means that a rational processing strategy could in principle attend to information from disparate sources (lexical, syntactic, discourse context, background world knowledge, visual environment) to optimize rapid belief formation---and there is evidence that information from many of these sources is indeed brought to bear in incremental sentence comprehension (e.g., MacDonald, 1993; Frazier & Rayner, 1982; Rohde et al., 2008; McRae et al., 2005; Tanenhaus et al., 1995). Nevertheless, nearly all formalized models of online sentence comprehension implicitly contain an important interface constraint that limits the use of cross-source information in belief formation: namely, the "input" to the sentence processor consists of a sequence of words, whereas a more natural representation would be something like the output of a word-recognition model---a probability distribution over word sequences.  In this talk, I examine how online sentence comprehension might be formalized if this constraint is relaxed.  I show how generative probabilistic grammars can be a unifying framework for representing both this type of uncertain input and the probabilistic grammatical information constituting a comprehender's knowledge of their own language.  The outcome of the comprehension process is then simply the intersection of a probabilistic input with a probabilistic grammar.  I then show how this model may shed light on two outstanding puzzles in the sentence comprehension literature: (i) data underlying the "good enough representation" approach of (F.) Ferreira et al. (2003), such as (1) below:

While Anna dressed the baby spit up in the bed.

where "the baby" is taken by many readers to be both the theme of "dressed" and the agent of "spit up", and (ii) the local-coherence effects of Tabor et al. (2004), in which sentences such as (2) below:
 
The coach smiled at the player tossed the frisbee.

elicit what are apparently classic garden-path effects despite the fact that global context seemingly should rule out the garden path before it is ever pursued.
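One way to write down the "intersection" idea sketched above (a hedged reconstruction in my own notation, not a formula from the talk): if the comprehender receives noisy perceptual input I rather than a certain word string, beliefs about the true string w combine the grammar's prior with the input's likelihood,

\[
P(w \mid I) \;\propto\; P_{\mathrm{grammar}}(w)\, P(I \mid w),
\]

so that word strings similar to the one actually presented can retain some probability mass. This is the sense in which comprehension under uncertain input intersects a probability distribution over word sequences with probabilistic grammatical knowledge, leaving room for readings like those in (1) and (2).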

2008-03-04

The Activation of Verbs in Sentences involving Verb Phrase Anaphors

Sarah Callahan

+ more

Numerous psycholinguistic studies have focused on the processing of noun phrase (NP) anaphors (e.g. pronouns). This research has suggested that the presentation of a noun activates its lexical representation, that this activation declines rapidly over the next 700-1000ms and, critically, that the presentation of a co-referential anaphor immediately re-activates this representation (cf. Nicol & Swinney, 2002). In contrast, comparatively few studies have investigated verb phrase (VP) anaphors, so although it is clear that the presentation of a verb activates its lexical representation (including meaning, argument structures, and thematic roles (e.g. Ferretti, McRae, & Hatherell, 2001; Shapiro, Zurif, & Grimshaw, 1987)), little is known about the duration of this activation and any (re-)activation at a corresponding anaphor.

The current study comprises two experiments using cross-modal lexical priming to investigate the activation of a verb throughout two conjoined sentences involving a VP anaphor (e.g. did too). The results indicated that activation related to the initial presentation of the verb was undetectable by a point approximately 1500ms following presentation. This finding fits with evidence from nouns that activation related to the initial presentation decays relatively quickly; on the other hand, contrary to typical findings for nouns, the verb was active at all points tested in the second sentence rather than just at the corresponding anaphor. Based on the points tested, this pattern of results suggests the verb was reactivated following the conjunction (i.e. and) and that this activation was maintained throughout the second sentence at least until a point immediately following the anaphor. Overall, these findings suggest important differences in the activation of verbs and nouns during sentence processing and highlight the need for further work on this issue.

2008-02-19

Emergent Conceptual Hierarchies and the Dynamics of Similarity

Ken McRae University of Western Ontario

+ more

People's knowledge of concrete nouns usually is viewed as hierarchical. The goal of the present research is to show that behavior that appears to implicate a hierarchical model can be simulated using a flat attractor network. The network learned to map wordforms for basic-level concepts to their semantic features. For superordinate concept learning, each superordinate wordform was paired equally often with each of its exemplars' representations, so that typicality was not built into the training regime, and the network developed superordinate representations based on experience with exemplars. We established the basic validity of the model by showing that it predicts typicality ratings. Previous experiments have shown roughly equal superordinate-exemplar priming (fruit priming cherry) for high, medium, and low typicality exemplars. Paradoxically, other studies and attractor network simulations show that basic-level concepts must be highly similar to one another to support priming. We conducted an experiment and simulation in which priming was virtually identical for high and medium/low typicality items. In the model, unlike features of basic-level concepts, superordinate features are partially activated from a wordform due to a superordinate's one-to-many mapping. Thus, it is easy for a network to move from a superordinate representation to the representation of one of its exemplars, resulting in equivalent priming effects regardless of typicality. This research shows that a flat attractor network produces emergent behavior that accounts for human results that have previously been viewed as requiring a hierarchical representational structure, and provides insight into temporal aspects of the influences of similarity.

Meaning, structure, and events in the world

Mary Hare Bowling Green State University

+ more

Fundamental issues in the representation and processing of language have to do with the interface among lexical, conceptual, and syntactic structure. Meaning and structure are related, and one view of this relationship is that lexical meaning determines structure. In this talk I will argue that the relevant generalizations are not based on lexical knowledge, but on the language user's interpretation of generalized events in the world. A set of priming studies will demonstrate that nouns denoting salient elements of events prime event participants. In addition, corpus analyses and self-paced reading studies will show that different senses of a verb reflect variations in the types of events the verb refers to, and that this knowledge leads to expectations about subsequent arguments or structure during sentence comprehension.

2008-01-22

Routine Validation of Explicit and Implicit Ideas in Reading

Murray Singer

+ more

There is extensive evidence that understanding a sequence as simple as "Dorothy poured the water on the bonfire. The fire went out" requires the inference that the first event caused the second. Here, it is further proposed that the elements of this sequence must be validated against antecedent text ideas or relevant world knowledge before the inference is accepted by the reader. Otherwise, it would appear neither more nor less coherent than "Dorothy poured the water on the bonfire. The fire grew hotter."

Evidence is presented that readers engage in such validation processes in the construction of inferences derived from narrative and expository text, and even for explicitly stated text ideas. These findings are interpreted with reference to a constructionist analysis of discourse comprehension, two assumptions of which are that readers (a) maintain coherence at multiple levels of text representation and (b) try to explain why actions, events, and states are mentioned in the message.

2007-12-04

The Neural Correlates of Figurative Expressions

Dieter Hillert

+ more

The linguistic design of the human language system is typically based on assumptions about the compositional structure of literal language. However, it has been estimated that, for instance, in American English people use at least 25,000 idiomatic-like expressions. The talk will therefore focus on the cognitive and neural correlates of figurative language comprehension. An account of the human language system is suggested that distinguishes between a left-sided core language system and a bilateral pragmatic language network. Comprehension of idiomatic expressions that involve alternative parsing strategies correlates with an increase in cognitive costs compared to comprehension of non-figurative default sentence structures. The costs associated with idiom processing seem to be compatible with those related to resolving syntactic ambiguities or reconstructing canonical sentence structures. Moreover, while ambiguous idioms, like any other kind of standing ambiguity, seem to engage the left superior and medial frontal region to induce search processes through conceptual space, opaque idioms seem to be parsed and rehearsed in Broca's region. By contrast, comprehension of canonical and unambiguous sentences appears to evoke exclusively the left superior and middle temporal cortex. It is concluded that immediate linguistic computations are functionally organized in a modular fashion, but their neural correlates are shared by different cognitive domains.

2007-11-27

Daniel Casasanto Stanford University

+ more

How do people transform experience into knowledge? This talk reviews a series of studies testing the hypothesis that our physical experiences in perception and motor action contribute to the construction of even our most abstract thoughts (e.g., thoughts about value, time, happiness, etc.). Further, these studies begin to distinguish the contributions of linguistic experience, cultural experience, and perceptuo-motor experience to the formation of concepts and word meanings. Some experiments show that people who talk differently think differently; others show influences of non-linguistic cultural practices on conceptual structure; others show that people with different bodies, who interact with their environments in systematically different ways, form dramatically different abstract concepts. These demonstrations of linguistic relativity, cultural relativity, and what I will call 'bodily relativity' highlight the diversity of the human conceptual repertoire, but also point to universals in the processes of concept formation.

2007-11-20

Interactions between word- and sound-based processes in multilingual speech production

Matt Goldrick

+ more

Interactive effects--where processing at one level is modulated by information encoded at another level--have been the focus of a great deal of controversy in psycholinguistic theories. I'll discuss new evidence from my laboratory examining interactions between word- and sound-level processes in multilingual speech production. These results demonstrate that whole-word properties (cognate status, lexicality) influence the processing of sound structure both at a categorical, segmental level and at gradient, phonetic levels.

2007-11-13

Investigating situated sentence comprehension: evidence from event-related potentials

Pia Knoeferle

+ more

2007-11-06

Cross-linguistic investigation of determiner production

Xavier Alario

+ more

Language production is generally viewed as a process in which conceptual semantic messages are transformed into linguistic information. Such a description is probably appropriate for some aspects of the process (e.g. noun production), yet it is clearly incomplete.

Consider for instance the fact that in numerous languages determiner forms depend not only on semantic information but also on several other kinds of information. In Germanic, Slavic, and Romance languages, the retrieval of the determiners (and other closed-class words, such as pronouns) also depends on a property of the nouns called “grammatical gender.” For instance, in Dutch, nouns belong to the so-called “neuter” gender or to the “common” gender. The definite determiners accompanying the nouns belonging to the two sets are respectively het (e.g. het huis, ‘the house’) and de (e.g. de appel, ‘the apple’). In English, consonant-initial nouns and vowel-initial nouns can require different indefinite article forms (e.g. a pear vs. an apple).

Such properties of determiners surely impose constraints on how these lexical items can be retrieved. For this very reason, determiners provide a broad testing ground for contrasting psycholinguistic hypotheses of lexical processing and grammatical encoding. In my talk, I will review the cross-linguistic research I have been conducting on determiner retrieval. One important question that will be asked, and only tentatively answered, concerns the extent to which open-class words such as nouns and closed-class words such as determiners are processed and selected by similar mechanisms.

2007-10-30

The development of word recognition: a cognitive control problem?

Sarah Creel

+ more

2007-10-23

Meaning & Motor Action: The role of motor experience in concept formation

Daniel Casasanto Stanford University, Department of Psychology

+ more

How do people transform experience into knowledge? This talk reviews a series of studies testing the hypothesis that our physical experiences in perception and motor action contribute to the construction of even our most abstract thoughts (e.g., thoughts about value, time, happiness, etc.). Further, these studies begin to distinguish the contributions of linguistic experience, cultural experience, and perceptuo-motor experience to the formation of concepts and word meanings. Some experiments show that people who talk differently think differently; others show influences of non-linguistic cultural practices on conceptual structure; others show that people with different bodies, who interact with their environments in systematically different ways, form dramatically different abstract concepts. These demonstrations of linguistic relativity, cultural relativity, and what I will call 'bodily relativity' highlight the diversity of the human conceptual repertoire, but also point to universals in the processes of concept formation.

2007-10-16

Sign Language Surprises?

Susan Fisher

+ more

Until quite recently, most research on sign languages has been on those sign languages based originally in Europe, such as differences between Asian and Western sign languages in syntax, the use of prosody to convey syntactic distinctions, and especially word information. If time permits, I shall then return to the proposed commonalities and speculate on why they don’t seem to extend to so-called “village” sign languages.

2007-10-09

On words and dinosaur bones: Where is meaning?

Jeff Elman UC San Diego

+ more

Virtually all theories of linguistics and of language processing assume that language users possess a mental dictionary - the mental lexicon - in which is stored critical knowledge of words. In recent years, the information that is assumed to be packed into the lexicon has grown significantly. The role of context in modulating the interpretation of words has also become increasingly apparent. Indeed, there now exists an embarrassment of riches which threatens the representational capacity of the lexicon.

In this talk I will review some of these results, including recent experimental work from adult psycholinguistics and child language acquisition, and suggest that the concept of a lexicon may be stretched to the point where it is useful to consider alternative ways of capturing the knowledge that language users have of words.

Following an idea suggested by Dave Rumelhart in the late 1970s, I will propose that rather than thinking of words as static representations that are subject to mental processing -- operands, in other words -- they might be better understood as operators, entities that operate directly on mental states in what can be formally understood as a dynamical system. These effects are lawful and predictable, and it is these regularities that we intuitively take as evidence of word knowledge. This shift from words as operands to words as operators offers insights into a number of phenomena that I will discuss at the end of the talk.

2007-06-05

Pia Knoeferle

+ more

2007-05-29

Klinton Bicknell

+ more

2007-05-22

Adam Tierney

+ more

2007-05-15

Sarah Callahan

+ more

2007-05-08

Leah Fabiano

+ more

2007-05-01

Bob Slevc

+ more

2007-04-24

Arielle Borovsky

+ more

2007-04-17

Kim Plunkett

+ more

2007-04-10

Vic Ferreira

+ more

2007-03-13

Leah Fabiano

+ more

2007-03-06

Michael Ramscar

+ more

2007-02-27

Zenzi Griffin

+ more

2007-02-20

Henry Beecher

+ more

2007-02-13

Robert Kluender

+ more

2007-01-30

Hannah Rohde

+ more

2006-11-28

Learning and Liking a New Musical System

Psyche Loui

+ more

One of the intriguing characteristics of human cognition is its tendency to make use of relationships between sounds. Sensitivity to sound patterns is especially important for the perception of language and music. While experiments on language have made use of many natural languages as well as some artificial languages, experiments investigating the learning of music to date have mostly relied on sounds which adhere to principles of Western music.

I will present several studies that investigate the learning of a novel system of musical sounds. The system is based on the Bohlen-Pierce scale, a microtonal system tuned differently from the traditional Western scale. Chord progressions and melodies were composed in this scale as legal exemplars of two sets of grammatical rules. Participants listened to melodies in one of the two grammars, and completed learning-assessment tests which included forced-choice recognition and generalization, pre- and post-exposure probe tone ratings, and subjective preference ratings. When given exposure to a small number of melodies, listeners recognized and preferred melodies they had heard, but when exposed to a sufficiently large set of melodies, listeners were able to generalize their recognition to previously-unencountered instances of the familiar grammar.
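As an illustration of how the stimulus system differs from Western tuning: in its equal-tempered form, the Bohlen-Pierce scale divides the 3:1 "tritave" into 13 equal steps, rather than dividing the 2:1 octave into 12. The short sketch below is mine, not the speaker's, and the base frequency is an arbitrary choice.

```python
# Sketch: equal-tempered Bohlen-Pierce pitches vs. 12-tone equal temperament.
# BASE_HZ is an arbitrary reference frequency chosen for illustration.

BASE_HZ = 220.0

def bohlen_pierce(step: int) -> float:
    """Frequency of the nth step: 13 equal divisions of the 3:1 tritave."""
    return BASE_HZ * 3 ** (step / 13)

def equal_temperament_12(step: int) -> float:
    """Frequency of the nth semitone: 12 equal divisions of the 2:1 octave."""
    return BASE_HZ * 2 ** (step / 12)

if __name__ == "__main__":
    for n in range(14):
        print(f"step {n:2d}: BP {bohlen_pierce(n):7.2f} Hz   "
              f"12-TET {equal_temperament_12(n):7.2f} Hz")
```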

Event-Related Potentials in response to infrequent chords in the new musical system revealed a frontal Early Anterior Negativity (EAN) at 150-210ms, followed by a prefrontal Late Negativity (LN) at 400-600ms. These effects increased over the course of the experiment, and were dictated by the relative probability of the chords. Findings in the new musical system parallel those obtained in Western music and also predict individual differences in behavioral tests. We conclude that musical experience recruits a flexible set of neural mechanisms that can rapidly integrate sensory inputs into novel contexts. This rapid integration relies on statistical probabilities of sounds, and may be an important cognitive mechanism underlying music and language.

2006-11-21

Multiple logistic regression and mixed models

Roger Levy & Florian Jaeger

+ more

Multiple regression models are a generalization of ANOVAs. Modern variants of regression models (so-called mixed models) come with a number of advantages, such as scalability, increase in power, and less dependency on balanced designs. For example, standard ANOVAs require balanced designs, which often leads to very unnatural distributions of stimuli types within an experiment. Modern regression models can to some extent free researchers from these restrictions. Multiple regressions easily afford the inclusion of different kinds of independent variables (such as categorical and continuous variables) in the same analysis. In contrast to ANOVAs, relative effect sizes and directions of effects are directly evident from multiple regression outputs.

We give an introduction to multiple regression and mixed models (in the software package R). We use real psycholinguistic data samples and show step by step how the analyses are performed. The experimental data we use have categorical dependent variables (such as priming data, answer accuracy, multiple choice, etc.). Data of this kind are usually analyzed using ANOVAs over percentages. This is problematic for a couple of reasons that we will discuss. We discuss the pros and cons of an alternative analysis, logistic regression. Traditional logistic regression does not allow for the modeling of random subject or item effects. We show how modern statistical methods, such as logit mixed models or bootstrapping over subjects and items, address this challenge. In the course of this, we also go through some tools and visualizations in R that we find particularly useful.
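The talk itself works through these analyses in R; purely as a cross-language sketch (the data file and column names below are hypothetical, and statsmodels' linear mixed model stands in for the logit mixed models discussed above), the same kinds of models can be written in Python as follows:

```python
# Sketch of the regression analyses discussed above, using statsmodels.
# The CSV file and its columns (subject, condition, log_freq, correct, rt)
# are hypothetical stand-ins for a real psycholinguistic data set.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("priming_data.csv")  # hypothetical data file

# Logistic regression over trial-level binary outcomes, instead of an ANOVA
# over per-subject percentages; categorical and continuous predictors can be
# mixed freely in one formula, and effect sizes/directions are read off the fit.
logit_fit = smf.logit("correct ~ condition + log_freq", data=df).fit()
print(logit_fit.summary())

# A linear mixed model with random intercepts for subjects, the rough analogue
# of a random subject effect in R's lme4. (Crossed subject/item random effects
# and logit mixed models are handled more naturally in R, as in the talk.)
lmm_fit = smf.mixedlm("rt ~ condition", data=df, groups="subject").fit()
print(lmm_fit.summary())
```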

2006-11-14

Implicit learning of probabilistic distributions of structural events: Evidence from syntactic priming

Neil Snider & Florian Jaeger

+ more

Language users employ structural probabilities when processing utterances (see Pickering & van Gompel, 2006 for an overview). For example, the probability of a specific argument frame given a verb (henceforth VERB BIAS) affects comprehension and production. So language users' knowledge about verbs includes information about their biases (see also Stallings et al., 1998). This raises the question whether VERB BIASES are acquired once (e.g. during a critical period) or whether speakers keep learning VERB BIASES. We argue that a phenomenon known as syntactic priming yields evidence for the latter.

Syntactic priming (e.g. Bock, 1986) refers to the tendency of speakers to repeat abstract syntactic patterns. Consider the ditransitive alternation:

(1a) We could give [physicals] [to the rest of the family members]. (NPPP)
(1b) We could give [the rest of the family members] [physicals]. (NPNP)

Speakers are more likely to choose the NPNP construction if there has been an NPNP construction in the preceding discourse (and, mutatis mutandis, for NPPP). Such syntactic priming has been attributed to implicit learning (Bock & Griffin, 2000, 2006; Ferreira, in progress). Implicit learning predicts that less frequent (and hence more surprising) events lead to more activation (and hence more learning). So, if speakers keep track of VERB BIAS, and if priming effects are in part due to this implicit learning, priming strength (i.e. the increase in likelihood that a prime and target have identical structures) should be inversely correlated with VERB BIAS.

Study 1 is a meta-analysis of five ditransitive priming experiments (Bock & Griffin, 2000, 2006). After exclusion of incomplete trials, the data consist of 8,212 prime-target trials. We find that the prime's VERB BIAS is inversely correlated with its priming strength. This effect is highly significant (p < .001) even after accounting for all factors from Bock & Griffin's (2000, 2006) analyses, as well as additional controls.

Study 2 replicates the effect for spontaneous speech. We use a database of 2,300 ditransitives extracted by Bresnan et al (2004; also Recchia et al., 2006) from the full Switchboard corpus (LDC, 1993). We find the predicted inverse effect of the prime's VERB BIAS to be highly significant (p < .001), even after controlling for other factors influencing the choice between NPPP and NPNP (Bresnan et al., 2004).

We conclude that priming strength increases with the surprisal associated with the prime's structure (given the prime's VERB BIAS). Only implicit learning accounts of syntactic priming (Bock & Griffin, 2000) predict this relation. Our results also argue that speakers continuously 'keep track' of probabilistic distributions even of such fine-grained events as the conditional probability of a syntactic construction given the verb (VERB BIAS).
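A minimal sketch of the quantities involved (the verbs and counts below are invented for illustration and are not the authors' data or analysis code): VERB BIAS is the conditional probability of a structure given the verb, estimated here by relative frequency, and the implicit-learning account predicts stronger priming the more surprising the prime's structure is under that bias.

```python
import math

# Hypothetical counts of ditransitive frames per verb (not real corpus data).
frame_counts = {
    "give":  {"NPNP": 800, "NPPP": 400},
    "send":  {"NPNP": 300, "NPPP": 700},
    "award": {"NPNP": 150, "NPPP": 50},
}

def verb_bias(verb: str, frame: str) -> float:
    """VERB BIAS: P(frame | verb), estimated by relative frequency."""
    counts = frame_counts[verb]
    return counts[frame] / sum(counts.values())

def prime_surprisal(verb: str, frame: str) -> float:
    """Surprisal (in bits) of the prime's structure given its verb.
    Implicit-learning accounts predict priming strength grows with this value."""
    return -math.log2(verb_bias(verb, frame))

for verb in frame_counts:
    for frame in ("NPNP", "NPPP"):
        print(f"{verb:6s} {frame}: bias = {verb_bias(verb, frame):.2f}, "
              f"surprisal = {prime_surprisal(verb, frame):.2f} bits")
```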

2006-11-07

All pronouns are not created equal: The processing and interpretation of null and overt pronouns in Spanish

Sarah Callahan

+ more

This study investigated the interpretation and processing of null and overt subject pronouns in Spanish. Participants completed an antecedent identification questionnaire and a word-by-word self-paced reading task. Both presented sentence pairs, the second of which contained an embedded clause. The number of possible referents was varied along with the form of the embedded subject pronoun. In the off-line questionnaire, the number and relative prominence of possible referents affected final interpretation, but the form of the pronoun had no effect. In contrast, in the on-line task, clauses with overt pronouns were read more slowly than those with null pronouns regardless of the number of possible referents. The analyses revealed that this effect was not immediate, but rather occurred later in the clause. Implications for models of the processing of co-reference are discussed.

2006-10-31

How do language users estimate probabilities?

Florian Jaeger & Roger Levy

+ more

There is considerable evidence that comprehension and production are (in part) probabilistic (Aylett & Turk, 2004; Gahl & Garnsey, 2004; Garnsey et al., 1997; Jurafsky et al., 2001; Staub & Clifton, 2006; Wasow et al., 2005). Little, however, is understood about the how and what of probabilistic language processing. In particular: (A) What type of information do language users consider when estimating probabilities? (Mitchell et al., 1995) (B) How local does this information have to be? (C) And, how fine-grained are the probabilistic events/units language users keep track of? We address these questions using corpus-based evidence from that-omission in non-subject-extracted relative clauses (NSRCs), where that is less likely for predictable NSRCs (Jaeger, 2006):

(1) [NP1 the words [PP to [NP songs [NSRC (that) she's listening to]]]]

We introduce a two-step modeling approach to distinguish different theories of probability estimation. In the first step, we derive estimates of NSRC predictability based on different assumptions about how speakers track NSRC predictability. In the second step, we compare these different estimates with regard to how much of that-omission they account for (in a logistic regression model including other controls taken from Jaeger, 2006).

We find that speakers are sensitive to the predictability of fine-grained linguistic units, and they estimate predictability using detailed structural cues of the utterance. These cues don't have to be adjacent to the target event, but a lot of the information relevant to the estimation of probabilities seems to be relatively local to the target.

The results are further evidence for probabilistic syntactic production (Jaeger, 2006; also Stallings et al., 1998). They are also modest steps towards a better understanding of probabilistic language processing. We present an interpretation of the data in the form of Uniform Information Density (Levy & Jaeger, to appear): if speakers want to optimize their chances of being understood (while conserving effort), they should structure their utterances so as to avoid peaks and troughs in information density. We present preliminary evidence in favor of this view.
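A compact way to state the Uniform Information Density idea invoked here (a standard formulation, not quoted from the talk): define the information carried by each word as its surprisal,

\[
I(w_i) \;=\; -\log P(w_i \mid \mathrm{context}_i).
\]

Speakers who want to avoid peaks and troughs in information density should then prefer the variant of an utterance that keeps \(I(w_i)\) roughly even across words; for example, producing an optional that before an otherwise unpredictable (high-surprisal) relative clause spreads that information over an extra word, which is consistent with the finding that that is less likely before predictable NSRCs.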

2006-10-24

Perseveration in English Comparative Production

Jeremy Boyd

+ more

An implicit assumption in many studies that make use of the elicited production methodology is that subjects’ responses reflect their true linguistic competence. The current work challenges this premise by looking at data on English comparative adjective acquisition that were collected using elicited production. We are interested in sequences like the following:

Trial Production
t-2 faster
t-1 older
t dangerouser

The specific question we ask is whether the child’s production of dangerouser on trial t reflects how the child actually thinks dangerous should be inflected for the comparative, or whether dangerouser is a perseveration of the -er pattern of inflection from trials t-1 and t-2. If the latter is true, then it would be infelicitous to conclude—as other researchers have (Graziano-King & Cairns, 2005; Gathercole, 1985)—that overuse of the -er pattern is evidence that children entertain an abstract ADJer mental representation.

We present evidence that bears on this issue from two sources.

First, we experimentally manipulated the temporal structure of the production task such that some subjects received back-to-back production trials, while others received production trials interspersed with trials in which a simple counting task was performed. If errors like dangerouser do result from perseveration, then their likelihood should be reduced when counting trials, which dramatically slow the pace of the task, are included. Second, we calculated a measure known as perseveration probability (Cohen & Dehaene, 1998) across all of the trials that our subjects participated in. This allows us to perform a number of analyses comparing perseveration probabilities across ages, experimental groups, and inflectional patterns. Preliminary results from these two sources of evidence will be discussed at the talk.

The question of whether the elicited production method is subject to perseveration effects is an important one. Our theories of competence, processing, and development are informed by the data that we collect. That these data may change according to how the method is applied suggests that our theories may also have to be adjusted accordingly. Additionally, at the clinical level, perseveration effects may cause some children to fail diagnostic tests of grammar and be labeled as language-impaired when they are, in fact, perfectly normal. Discovering how verbal perseveration works could be helpful in that it may pave the way for the construction of more effective diagnostic tools, which should result in fewer wasted resources.

2006-10-17

Roger Levy

+ more

Any theory of human syntactic processing must account for several crucial properties: our ability to effortlessly disambiguate highly ambiguous linguistic input; our ability to make inferences on the basis of incomplete inputs; and the fact that some parts of some sentences are more difficult for us to process than others. In psycholinguistics, the historically preeminent accounts of this last property have appealed primarily to resource limitations (e.g., Clifton & Frazier 1989, Gibson 1998): as a structural representation of the input is incrementally built, having to keep more partial structures in memory for a longer time is penalized. In this talk, however, I argue that an alternative, expectation-based account of syntactic processing -- where a comprehender's ability to predict an upcoming word is the chief determinant of the processing difficulty for that word -- is gaining support in a growing body of experimental results in the online processing of verb-final languages (e.g., Konieczny 2000, Vasishth 2002, Konieczny and Döring 2003) that is proving problematic for resource-limitation theories. I present a new information-theoretic derivation of the surprisal model of processing difficulty originally proposed by Hale (2001) that draws a close connection between the ideas of expectation and incremental disambiguation. I show that the surprisal model accounts for a variety of recent results in syntactic processing, including online processing of clause-final verbs (Konieczny 2000, Konieczny and Döring 2003) and V2 verbs (Schlesewsky et al. 2000) in German, subject-modifying relative clauses in English (Jaeger et al. 2005), and conditions under which syntactic ambiguity can facilitate comprehension (Traxler et al. 1998, van Gompel et al. 2001, 2005).

2006-10-10

Phonological representation in bilingual Spanish-English speaking children

Leah Fabiano

+ more

Paradis (2001) proposed the Interactional Dual Systems Model (IDSM) of bilingual phonological representation, which posits separate but non-autonomous systems of representation in bilingual children. This study attempted to provide evidence for interaction between the two systems of bilingual representation through the measurement of (1) the accuracy of phonemes shared by English and Spanish and the accuracy of those phonemes specific to either language, (2) the predictive capability of frequency of occurrence of sounds in each language, a markedness variable, and (3) the amount and type of phonological cross-linguistic effects present in the speech of bilingual children. The main hypothesis of this study is that if these interactive characteristics are observed in the speech of bilingual children, they may provide evidence for non-autonomy between the child's two phonological systems. Twenty-four typically-developing children, ages 3;0 to 4;0, were included in this study: eight bilingual Spanish-English speaking children, eight monolingual Spanish speakers, and eight monolingual English speakers. Single word and connected speech samples were obtained for each child in each language. The first step in this series of analyses was to obtain descriptive information for each subject. Paired samples t-tests and Friedman tests were used to examine shared versus unshared phoneme accuracy. A one-way ANOVA and a post hoc Tukey test examining PCC by subject were performed in order to determine that the data could be collapsed. Correlations by subject on PCC versus frequency were performed in order to determine the direction, p value, and strength of the relationship. A mixed effects regression analysis was then performed to determine if frequency was a significant predictor of shared PCC. Substitution errors of both the bilingual and monolingual speakers were examined to provide evidence for cross-linguistic effects. Results showed that for bilingual speakers phoneme accuracy for shared elements was significantly higher than that of unshared elements, that frequency did not demonstrate predictive capability for high phoneme accuracy, and that cross-linguistic effects were evident in the English and Spanish productions of bilingual children, thus providing support for the IDSM.

2006-10-03

Language comprehension and processing in speakers of different varieties of English

Tim Beyer

+ more

Although African American English (AAE) and Standard American English (SAE), the standard variety of English in the US, share many phonological forms, the grammars can differ substantially. For example, SAE 3rd person singular present 's', future contracted 'll', and past allomorphs 't/d' do not regularly appear in the surface form of AAE. This, among other evidence, suggests that while these morphemes carry tense information in SAE, they may not in AAE. An important question therefore becomes how AAE-speakers interpret SAE tense and aspect morphology. Using off- and on-line (eye-tracking) measures, this project investigates how 1st and 2nd grade AAE- and SAE-speakers interpret SAE tense and aspect morphology. Results show global comprehension patterns that accord with differences in the morphological systems of the children's native varieties and suggest that 1st and 2nd grade children are capable of rapidly integrating temporal information, but only when it is part of their native language variety.

2006-06-06

Understanding Words in Context: What role for Left Inferior Prefrontal Cortex?

Eileen Cardillo

+ more

The ability to use contextual information to aid word recognition is a ubiquitous aspect of normal speech comprehension. However, evidence from semantic priming tasks suggests that this capacity breaks down differentially with certain forms of aphasia and/or left-hemisphere damage. In particular, it has been suggested that aphasic patients with damage to left inferior frontal areas may be particularly impaired in the bottom-up activation of word meanings on the basis of semantic context, and those with lesions affecting left posterior-temporal areas may be especially impaired in more controlled aspects of lexical processing. I recently explored this hypothesis, and its alternatives, using an auditory sentence-priming task with 20 left-hemisphere damaged patients with a range of comprehension difficulty. I will present a preliminary analysis of their performance in this task as well as results from a Voxel-Based Lesion Symptom Mapping (VLSM) analysis of their behavior.

2006-05-30

Shannon Rodrigue

+ more

2006-05-23

Some Generalizations About Linguistic Generalization By Infants

LouAnn Gerken

+ more

One dimension on which more vs. less strongly constrained models of language acquisition vary is the amount of evidence required for a particular linguistic generalization. "Triggering" models require, in the limit, only a single datum to set an innate parameter, whereas less constrained models often arrive at a generalization by performing statistics over many exemplars from an input set. I will present data from research with 9- to 17-month-old infants and 4-year-old children, which explores the amount and type of input required for learners to generalize beyond the stimuli encountered in a brief laboratory exposure. All of the studies suggest that generalization requires a minimal number of data points, but more than just one, and that different subsets of the input lead to different generalizations. Taken together, the data provide direction for examining the ways in which innate constraints and learning via statistics may combine in human language development.

2006-05-16

Elizabeth Redcay

+ more

The second year of life is a time of dramatic cognitive and social change. One of the most striking advances is in a child's language development. A typical 8 month old infant understands only a few words, the average 16 month old understands over 100 words, and the typical 24 month old understands many hundreds of words. The anatomical substrate for language processing during this time of rapid word learning remains unclear as there have been no functional magnetic resonance imaging (fMRI) studies of healthy, typically developing toddlers. The second year of life is also characterized by a marked absence of language growth in children with autism. Autism emerges during the first few years of life and is diagnosed in part by deficits in language and communication. Structural evidence shows brain differences from controls are greatest during this age.

However, no functional MRI data exist from young children with autism. In this talk, I will present an fMRI study examining passive speech comprehension in 10 typically developing toddlers (mean±SD: 21±4 mo) and 10 typically developing older children (39±3 mo) during natural sleep. Our results from this study suggest that rapid language acquisition during the second year of life is not accounted for by classical superior temporal language areas alone, but instead appears to result from the utilization of frontal cortical functions as well as other brain regions.

Additionally, I will present preliminary fMRI data from young 2-3 year old children with autism who were presented with this same speech paradigm during natural sleep.

2006-05-09

Long Term Activation of Lexical and Sublexical Representations

Gedeon Deák Cognitive Science and Human Development

+ more

It is commonly believed that young children are precocious word-learners. It is less clear what this belief entails. Are children very good at learning new words? Compared to whom? Compared to what other type of information? If word-learning is specialized, how does it get that way? These and other questions began inconveniencing people (especially those who see language as a mystical ability) about 10 years ago.

The common view that children have special (fast) word-learning processes has only three problems: lack of evidence, disconfirming evidence, and faulty underlying logic. Other than that, it is difficult to disprove. Nevertheless, my students and I began several experiments to isolate what, if anything, is specialized about children’s word learning. We ran several experiments showing that the “mutual exclusivity” bias (i.e., the apparent tendency for children to reject a new word for something they can already name; Markman, 1994) is in fact a weak, transitory “fan effect” (Anderson, 1972) that is not specific to novel words. In the process, we unexpectedly found that 4- and 5-year-old children are actually slower to learn new words than new facts (even if novel words are embedded in the fact) or new pictorial symbols. This finding caused teeth-gnashing and hair-rending in reviewers. To ease their suffering, we started another experiment to try to replicate this “slow mapping” effect in young children. Preliminary results suggest that 3-year-olds are no faster, and perhaps a little slower, to learn pictograms than words. Four-year-olds show no difference. Both 3- and 4-year-olds learn new facts faster than new words, even though facts are more complex, and factors such as exposure, novelty, and phonological difficulty are precisely controlled (or disadvantageous for facts). The fact-advantage is seen in immediate and delayed (one week) memory tests. A related claim that children make more systematic generalizations from new words (Waxman & Booth, 2000, 2001; Behrend et al, 2001) was not confirmed.

I will describe these studies in more detail, discuss the implications of the results, and solicit feedback on ongoing or planned follow-up studies.

2006-05-02

Long Term Activation of Lexical and Sublexical Representations

Arthur Samuel (work done in collaboration with Meghan Sumner)

+ more

When a listener hears a word like "tape", current theories of spoken word recognition assert that recognition involves the activation of both lexical ("tape") and sublexical (e.g., /t/, /e/, /p/) representations. In contrast, when an unfamiliar utterance ("dape") is heard, no lexical representations can be settled on. Using a long-term priming paradigm, we examine whether representations remain active for at least 10-20 minutes. We approach this by examining lexical decision times for nonwords (e.g. "dape"), as a function of the words or nonwords heard 10-20 minutes earlier. We find that the time needed to identify a nonword as a nonword is delayed if a similar word was heard 10-20 minutes before; there is no such delay if the nonword itself had previously been heard. Conversely, nonword processing is faster if a similar (but not identical) nonword had been presented previously. The delay caused by prior word exposure suggests that the word's lexical representation remains active, and competes with the nonword during its recognition. This interference is found both for items sharing onsets ("flute-floose") and offsets ("tape-dape"). The equivalence of these two cases supports word recognition models in which a word's lexical neighborhood determines the set of lexical competitors. The enhanced processing of a nonword due to having heard a similar nonword supports the existence of sublexical (e.g., consonant-vowel, and vowel-consonant) units that can retain activation over a surprisingly long time period.

2006-04-25

Relationships between processing of meaningful linguistic and nonlinguistic sounds

Arielle Borovsky, Ayse Saygin, & Alycia Cummings

+ more

To what degree is the processing of language special? We present data from a large scale project that examines the behavioral correlates of nonlinguistic and linguistic comprehension in a number of patient populations. We report on data that examines the auditory comprehension of environmental and verbal sounds in a balanced task using the same verbal and nonverbal items. This test has been administered to a number of populations including: neurologically normal children, college students and elderly participants, children and adults with left and right hemisphere focal lesions, and children diagnosed with language impairment. In all cases, we fail to find behavioral dissociations between linguistic and nonlinguistic sound processing. These studies show that language is subserved at least in part by a domain-general system and shares processing and neural resources with other complex and overlearned multi-modal skills.

2006-04-18

Prosodic disambiguation of syntactic structure: For the speaker or for the addressee?

Tanya Kraljic

+ more

Evidence has been mixed on whether speakers spontaneously and reliably produce prosodic cues that resolve syntactic ambiguities. And when speakers do produce such cues, it is unclear whether they do so "for" their addressees (the audience design hypothesis) or "for" themselves, as a by-product of planning and articulating utterances. Three experiments addressed these issues. In Experiments 1 and 3, speakers followed pictorial guides to spontaneously instruct addressees to move objects. Critical instructions (e.g., "Put the dog in the basket on the star") were syntactically ambiguous, and the referential situation supported either one or both interpretations. Speakers reliably produced disambiguating cues to syntactic ambiguity whether the situation was ambiguous or not. However, Experiment 2 suggested that most speakers were not yet aware of whether the situation was ambiguous by the time they began to speak, and so adapting to addressees' particular needs may not have been feasible in Experiment 1. Experiment 3 examined individual speakers' awareness of situational ambiguity and the extent to which they signaled structure, with or without addressees present. Speakers tended to produce prosodic cues to syntactic boundaries regardless of their addressees' needs in particular situations. Such cues did prove helpful to addressees, who correctly interpreted speakers' instructions virtually all the time. In fact, even when speakers produced syntactically ambiguous utterances in situations that supported both interpretations, eye-tracking data showed that 40% of the time addressees did not even consider the non-intended objects.

2006-04-11

Speakers' control over leaking private information

Liane Wardlow Lane, Michelle Groisman & Victor S. Ferreira

+ more

Past research demonstrates that speakers sometimes make references to privileged objects (objects known only to them) when naming mutually visible objects (Horton & Keysar, 1996; Nadig & Sedivy, 2002; Wardlow & Ferreira, 2003). For example, Wardlow and Ferreira (2003) report a task where speakers and addressees were presented with four cards each depicting a simple object. Both could see the same three objects (i.e., a circle, a square, and a triangle), but the speaker could see an additional, privileged object (a smaller triangle). Speakers were asked to identify one of the mutually visible objects (the target) for the addressee. When asked to identify the triangle, speakers should have said "triangle." However, they often said "large triangle", as if they failed to account for perspective differences. Interestingly, such utterances serve to implicitly leak extra information. Here, "large triangle" conveys that the speaker can also see another, smaller triangle. But can speakers avoid communicating implicit information when doing so conflicts with their goals?

We used a referential communication task like that described above. On test trials, the privileged object was the same as the target object but differed in size, whereas on control trials, the privileged object was distinct. In the baseline block, speakers were simply asked to name a target. In conceal blocks, participants were given additional instructions that encouraged speakers to hide the identity of the foil when identifying the target. Specifically, after addressees selected the target, they could guess the identity of the privileged object. Speakers and addressees kept scores; a correct guess gave addressees an additional point. Thus, speakers were provided with both incentive and instruction to conceal the identity of the privileged object. If speakers can control leaking information, then the conceal instruction should reduce modifier use relative to baseline performance.

Results showed that on test trials, speakers used modifying adjectives more in the conceal condition (14.4%) than in the baseline condition (5.4%). Speakers rarely used modifying adjectives in the control conditions (1.4% and .5%). Thus, the instruction to conceal privileged information made speakers refer to it even more; this is likely because the instruction to conceal privileged objects served to make them highly salient, and the production system had a difficult time blocking the intrusion of such information. These results localize perspective-taking errors to a stage of processing, grammatical encoding, that is outside speakers' executive control. Additionally, the results suggest not only that leaked information may be information speakers want to keep private, but that attempts to conceal it might make its leakage even more likely. If so, these results are likely to be relevant to everything from interpersonal interactions to adversarial negotiation.

2006-03-14

The Face of Bimodal Bilingualism

Jennie Pyers

+ more

Research with bilinguals indicates that the lexicons of both languages are active even during language-specific production. However, it is unclear whether the grammars of both languages are similarly active. For bimodal (sign-speech) bilinguals, the articulators of their two languages do not compete, enabling elements of ASL to appear during English production. Because ASL uses grammatical facial expressions to mark structures like conditionals and wh-questions--raised brows and furrowed brows respectively--we hypothesized that these nonmanual markers might easily be produced when bimodal bilinguals speak English.

12 bimodal bilinguals and 11 non-signing English speakers were paired with non-signing English speakers. We additionally paired the same 12 bimodal bilinguals with a Deaf native signer to elicit the same structures in ASL. We elicited conditional sentences by asking participants to tell their interlocutor what they would do in 6 hypothetical situations. Wh-questions were elicited by having participants interview their interlocutor to find out 9 specific facts. We recorded and coded the facial expressions that co-occurred with the spoken English sentences.

For bimodal bilinguals, there was no difference between the proportion of conditionals produced with raised brows in the ASL and English conditions. We observed a significant difference between the bimodal bilinguals and the non-signers in the proportion that occurred with a raised brow. And the bimodal bilinguals timed the raised brow with the onset of the conditional clause, indicating that these raised brows were grammatical and those produced by the non- signers were gestural. The fact that the non-signers frequently produced a raised brow with conditionals points to the co-speech gestural origins of the conditional non-manual.

When producing English wh-questions, the bimodal bilinguals produced furrowed brows significantly less often than they did for ASL wh- questions, but significantly more often than the non-signers, who rarely furrowed their brows. Because the bimodal bilinguals did not completely suppress ASL grammatical facial expressions while speaking English, we conclude that both languages are simultaneously active in the bilingual brain.

While speaking English, bimodal bilinguals produced the wh-nonmanual less frequently than the conditional nonmanual. We argue that this difference arises from competition with affective and conversational facial expressions. Raised brows for non-signers carry positive affect and indicate an openness to communicate (Janzen & Shaeffer, 2002; Stern, 1977). The facial grammar of ASL conditionals would not affectively compete with this co-speech facial gesture. The furrowed brow is a component of the anger expression and the puzzled expression, and could be misinterpreted by non-signers (Ekman, 1972). As a result, bimodal-bilinguals produce the ASL facial grammar with English wh-questions less often.

This study illuminates the gestural origins of ASL nonmanual markers, informs current accounts of ASL facial grammar, and reveals the impact of modality on the nature of bilingualism.

2006-03-07

Optionality in Comparative Production

Jeremy Boyd & Bob Slevc

+ more

Why do grammatical options exist in a language? Having to choose between different ways of expressing a given meaning (e.g., the dative alternation, or -er versus more comparatives) might make production and comprehension more difficult. Alternatively, grammatical options might offer certain advantages (Bock, 1982). Corpus analyses by Mondorf (2003) found that, for adjectives that alternate in comparative form (e.g. angrier ~ more angry), the more variant tends to occur more often in syntactically complex environments. Mondorf explains this pattern of results by making the following claims:

(1) The distribution of -er and more comparatives is due to processing considerations.
(2) Speakers increase use of the more variant in syntactically complex environments to help listeners.
(3) Use of more helps listeners by simplifying parsing, and acting as a conventionalized warning of upcoming complexity.

These arguments, however, deserve closer scrutiny. First, corpus data is not ideally suited to making claims about processing. Second, while it is perfectly reasonable to assume that speakers might choose between linguistic alternatives based on a consideration of listener needs (Temperley, 2003), it may also be that speakers choose between options based on their own processing demands, and not on listener-based factors (Ferreira & Dell, 2000). The current set of experiments used an elicited production methodology to address the following issues:

(A) Whether speakers do, in fact, choose between morphological alternatives based on processing factors.
(B) Which kinds of processing complexities might be relevant to the choice between -er and more.
(C) Whether speakers' choices are based on listeners' needs or on the demands of their own production processes.

2006-02-28

Foveal splitting causes differential processing of Chinese orthography in the male and female brain - Computational, behavioural, and ERP explorations

Janet Hsiao

+ more

In Chinese orthography, a dominant structure exists in which the semantic information appears on the left and the phonetic information appears on the right (SP characters); the opposite structure also exists, with the semantic information on the right and the phonetic information on the left (PS characters). Recent research on foveal structure and reading suggests that the two halves of a centrally fixated character may be initially projected and processed in different hemispheres. Hence, Chinese SP and PS characters may have presented the brain with different processing problems.

In this talk, I will present three studies examining the interaction between foveal splitting and structure of Chinese SP and PS characters. In computational modelling, we compared the performance of a split-fovea architecture and a non-split architecture in modelling Chinese character pronunciation. We then examined the predictions from the two models with a corresponding behavioural experiment and an ERP study. We showed that SP and PS characters create an important opportunity for the qualitative processing differences between the two cognitive architectures to emerge, and that the effects of foveal splitting in reading extend far enough into word recognition to interact with the gender of the reader in a naturalistic reading task.
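To make the architectural contrast concrete, the sketch below shows one way a split-fovea model's input wiring can differ from a non-split model's: the two halves of a fixated character are routed to separate hidden layers before being recombined. This is only an illustrative toy, not Hsiao's implementation; the layer sizes, the use of numpy, and the random weights are assumptions.

# Minimal sketch (not the authors' implementation) of split vs. non-split input wiring.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 40, 20, 30     # hypothetical input/hidden/output sizes

def nonsplit_forward(x, W_h, W_o):
    """Whole character feeds one shared hidden layer."""
    h = np.tanh(x @ W_h)
    return np.tanh(h @ W_o)

def split_forward(x, W_left, W_right, W_o):
    """Left and right halves of the character feed separate hidden layers,
    standing in for initial contralateral projection to the two hemispheres."""
    left, right = x[: N_IN // 2], x[N_IN // 2 :]
    h = np.concatenate([np.tanh(left @ W_left), np.tanh(right @ W_right)])
    return np.tanh(h @ W_o)

x = rng.standard_normal(N_IN)                          # one character's visual code
W_h = rng.standard_normal((N_IN, N_HID))
W_o = rng.standard_normal((N_HID, N_OUT))
W_left = rng.standard_normal((N_IN // 2, N_HID // 2))
W_right = rng.standard_normal((N_IN // 2, N_HID // 2))
print(nonsplit_forward(x, W_h, W_o).shape, split_forward(x, W_left, W_right, W_o).shape)

In a split architecture of this kind, the phonetic component of an SP character initially reaches a different "hemisphere" than it does for a PS character, which is the processing asymmetry the studies exploit.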

2006-02-21

Grammatical and Coherence-Based Factors in Pronoun Interpretation

Laura Kertz

+ more

We describe pronoun interpretation experiments in which a Coherence Hypothesis is tested against three preference-based systems of pronoun interpretation: the Subject Preference Hypothesis, the Parallel Structure Hypothesis, and the Modified Parallel Structure Hypothesis. We demonstrate that 'preferences' can be systematically disrupted through the manipulation of coherence, and that only the Coherence Hypothesis can predict the full range of co-reference patterns observed.

2006-02-07

Deciphering the Architecture of the Spoken Word Recognition System

Arty Samuel

+ more

Most current models of spoken word recognition assume that there are both lexical and sublexical levels of representation for words. The most common view is that speech is initially coded as sets of phonetic features, with some intermediate recoding (e.g., phonemes) before it is mapped onto lexical representations. There is a longstanding debate about whether the information flow through such an architecture is entirely bottom-up, or whether there is also top-down communication from the lexical level to the phonemic codes.

The selective adaptation procedure offers a particularly effective way to address this debate, because it provides a test that relies on the consequences of top-down lexical effects, rather than on a direct subjective report. Three sets of experiments use this approach to decipher the word recognition system's architecture. One set uses lexically-based phonemic restoration to generate the adapting sounds, and a second set uses a similar approach based on the "Ganong" effect. The third set extends this approach to audiovisual lexical adaptation, combining the technique with a "McGurk" effect manipulation. Collectively, the studies clarify how visual and auditory lexical information are processed by language users.

2006-01-31

Gap-filling vs. filling gaps: An ERP study on the processing of subject vs. object relative clauses in Japanese

Mieko Ueno

+ more

Using event-related brain potentials (ERPs), we investigated the processing of Japanese subject/object relative clauses (SRs/ORs). English ORs take longer to read (King & Just, 1991), increase PET/fMRI activation (Just et al., 1996; Caplan et al., 2000, 2001), and elicit left-lateralized/bilateral anterior negativity (LAN) between fillers and gaps (King & Kutas, 1995), which is largely attributed to a longer filler-gap distance. In contrast, gaps in Japanese relative clauses precede their fillers, and the linear gap-filler distance is longer in SRs than in ORs. Nevertheless, Japanese ORs take longer to read (Ishizuka et al., 2003; Miyamoto & Nakamura, 2003), perhaps because in both English and Japanese, ORs involve a longer structural filler-gap/gap-filler distance in their syntactic representations (O'Grady, 1997). We investigated how gap-filler association in Japanese would compare to filler-gap association in English, and whether it is linear or structural distance that determines comprehension difficulty. Stimuli included SRs/ORs transliterated as:

SR: [__ new senator-A attacked] reporter-D-T long-term colleague-N existed
OR: [new senator-N __ attacked] reporter-D-T long-term colleague-N existed

'The reporter [who __ attacked the new senator]/[who the new senator attacked __ ] had a long-term colleague'

ORs in comparison to SRs elicited frontal negativity at the embedded verb and head-noun regions, and long-lasting centro-posterior positivity starting at the head-noun. The former may indicate that both storage and subsequent retrieval of a filler are associated with LAN (Kluender & Kutas, 1993), and the latter may index syntactic integration costs of a filler (Kaan et al., 2000), suggesting similar parsing operations for filler-gap/gap-filler dependencies. Finally, our data correlate better with structural than with linear distance.

2006-01-24

Perceptual learning for speakers?

Tanya Kraljic

+ more

Listeners are able to quickly and successfully adapt to variations in speaker and in pronunciation. They are also able to retain what they have learned about particular speakers, and rapidly access that information upon encountering those speakers later. Recent research on perceptual learning offers a possible mechanism for such adaptations: it seems that listeners accommodate speakers' pronunciations by adjusting their own corresponding phonemic categories (Norris, McQueen & Cutler, 2003). Such adjustments can be retained for at least 25 minutes, even with intervening speech input (e.g., Kraljic & Samuel, 2005).

However, the specificity of perceptual learning with respect to particular speakers (and consequently, its implications for linguistic representation or organization) is not yet clear. Might particular perceptual information be preserved with respect to higher-level information about speaker identity, or do the adjustments rely on acoustic details? What happens when different speakers pronounce the same sound differently? Conversely, what happens when a sound is pronounced in the same 'odd' way but for different reasons (e.g., due to some idiosyncrasy of the speaker versus due to a dialectal change)? I will describe findings from a program of research that investigates these questions and others. I will also discuss how perceptual adjustments may or may not translate to adjustments in production, which often serve quite a different functional role than perceptual adjustments do.

2006-01-17

Thematic Role and Event Structure Biases in Pronoun Interpretation

Hannah Rohde (joint work with Andy Kehler and Jeff Elman)

+ more

The question of whether pronouns are interpreted based primarily on surface-level morphosyntactic cues (subjecthood, recency, parallelism) or as a byproduct of deeper discourse-level processes and representations (inference, event structure) remains unresolved in the literature. These two views come together in a sentence-completion study by Stevenson et al. (1994), in which ambiguous subject pronouns in passages such as (1) were resolved more frequently to the (to-phrase object) Goal of a previous transfer-of-possession event rather than the (matrix subject) Source.

(1) John handed the book to Bob. He _________.

Stevenson et al. considered two explanations for this result: a thematic role bias for Goals over Sources, and an event-structure bias toward focusing on the end state of such events. To distinguish these hypotheses, we ran an experiment that compared the perfective ("handed") and imperfective ("was handing") forms of the transfer verb. The thematic role relations are equivalent between the two versions, but the imperfective, by describing an event as an ongoing process, is incompatible with a focus on the end state of the event. We found significantly more resolutions to the Source for the imperfective passages as compared to the perfective ones, supporting the event-structure explanation. Our results show that participants' interpretations of the ambiguous pronouns appear to reflect deeper event-level biases rather than superficial thematic role preferences. These findings will be presented within a broader model of discourse coherence and reference.

2006-01-10

The Face of Bimodal Bilingualism

Jennie Pyers

+ more

Research with bilinguals indicates that the lexicons of both languages are active even during language-specific production. However, it is unclear whether the grammars of both languages are similarly active. For bimodal (sign-speech) bilinguals, the articulators of their two languages do not compete, enabling elements of ASL to appear during English production. Because ASL uses grammatical facial expressions to mark structures like conditionals and wh-questions--raised brows and furrowed brows respectively--we hypothesized that these nonmanual markers might easily be produced when bimodal bilinguals speak English.

12 bimodal bilinguals and 11 non-signing English speakers were paired with non-signing English speakers. We additionally paired the same 12 bimodal bilinguals with a Deaf native signer to elicit the same structures in ASL. We elicited conditional sentences by asking participants to tell their interlocutor what they would do in 6 hypothetical situations. Wh-questions were elicited by having participants interview their interlocutor to find out 9 specific facts. We recorded and coded the facial expressions that co-occurred with the spoken English sentences.

For bimodal bilinguals, there was no difference between the proportion of conditionals produced with raised brows in the ASL and English conditions. We observed a significant difference between the bimodal bilinguals and the non-signers in the proportion of conditionals that occurred with a raised brow. Moreover, the bimodal bilinguals timed the raised brow with the onset of the conditional clause, indicating that their raised brows were grammatical whereas those produced by the non-signers were gestural. The fact that the non-signers frequently produced a raised brow with conditionals points to the co-speech gestural origins of the conditional non-manual.

When producing English wh-questions, the bimodal bilinguals produced furrowed brows significantly less often than they did for ASL wh-questions, but significantly more often than the non-signers, who rarely furrowed their brows. Because the bimodal bilinguals did not completely suppress ASL grammatical facial expressions while speaking English, we conclude that both languages are simultaneously active in the bilingual brain.

While speaking English, bimodal bilinguals produced the wh-nonmanual less frequently than the conditional nonmanual. We argue that this difference arises from competition with affective and conversational facial expressions. Raised brows for non-signers carry positive affect and indicate an openness to communicate (Janzen & Shaeffer, 2002; Stern, 1977). The facial grammar of ASL conditionals would not affectively compete with this co-speech facial gesture. The furrowed brow is a component of the anger expression and the puzzled expression, and could be misinterpreted by non-signers (Ekman, 1972). As a result, bimodal bilinguals produce the ASL facial grammar with English wh-questions less often.

This study illuminates the gestural origins of ASL nonmanual markers, informs current accounts of ASL facial grammar, and reveals the impact of modality on the nature of bilingualism.

2005-11-29

Rachel Mayberry

+ more

How does the timing of language acquisition constrain its ultimate outcome? In a series of experiments we have found that linguistic experience in early childhood affects subsequent language processing and learning ability across modalities and languages. Specifically, adults who acquired a language in early life can perform at near-native levels on subsequently learned second languages regardless of whether they are hearing or deaf or whether their early language was signed or spoken.

By contrast, a paucity of language in early life leads to weak language skill in adulthood across languages and linguistic structures, as shown by a variety of psycholinguistic tasks, including grammatical judgment, picture-to-sentence matching, lexical access, and reading comprehension. These findings suggest that the onset of language acquisition during early human development dramatically alters the capacity both to learn and to process language throughout life, independent of the sensory-motor form of the early experience.

2005-11-22

Motor learning as applied to treatment of neurologically based speech disorders

Don Robin

+ more

This seminar will provide an overview of principles of motor learning with special reference to speech motor learning in adults and children with apraxia of speech. In particular, I will present an overview of a number of studies in our laboratory and how they fit with the broader literature on motor learning.

2005-11-15

External/Internal status explains neither the frequency of occurrence nor the difficulty of comprehending reduced relative clauses

Mary Hare/Ken McRae

+ more

McKoon and Ratcliff (Psychological Review, 2003) argue that reduced relatives like The horse raced past the barn fell are incomprehensible because the meaning of the RR construction requires a verb with an event template that includes an external cause (EC). Thus, reduced relatives with internal cause (IC) verbs like race are "prohibited". Their corpus analyses showed that reduced relatives are common with EC but rare with IC verbs.

Alternatively, RRs may be rare with IC verbs because few of these occur in the passive. Those that do, however, should make acceptable RRs, with ease of comprehension related to difficulty of ambiguity resolution rather than the IC/EC distinction. In two experiments, we show that English speakers willingly produce RRs with IC verbs, and judge their acceptability based on factors known to influence ambiguity resolution. Moreover, a regression model on our own corpus data demonstrates that frequency of passive, not IC/EC status, predicts RR frequency in parsed corpora. In summary, although there do exist reasons why the IC/EC distinction may be important for language use, this dichotomous distinction does not explain people's production or comprehension of sentences with reduced relative clauses. In contrast, factors underlying ambiguity resolution do.
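As a rough illustration of the kind of regression analysis described above, the sketch below regresses per-verb reduced-relative frequency on passive frequency and IC/EC status. The numbers are made-up placeholders and the variable names are mine; this is not the authors' corpus data or model specification.

# Illustrative sketch only: does passive frequency, rather than IC/EC status,
# predict reduced-relative (RR) frequency? Placeholder per-verb rates below.
import numpy as np
import statsmodels.api as sm

passive_freq = np.array([0.02, 0.10, 0.25, 0.01, 0.40, 0.15])   # hypothetical passive rates
is_external  = np.array([0,    1,    1,    0,    1,    0   ])   # 1 = external-cause verb
rr_freq      = np.array([0.00, 0.03, 0.08, 0.00, 0.12, 0.05])   # hypothetical RR rates

X = sm.add_constant(np.column_stack([passive_freq, is_external]))
model = sm.OLS(rr_freq, X).fit()
print(model.params)    # on the account above, passive_freq carries the predictive weight,
print(model.pvalues)   # not the IC/EC indicator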

2005-11-08

Presentation on Wernicke's Aphasia

Nina Dronkers

+ more

This presentation is the second in a series of talks on aphasia, a disorder of language due to injury to the brain. This presentation will concern Wernicke's aphasia, the type of aphasia that affects the lexical-semantic system without affecting motor speech production. An individual with Wernicke's aphasia has kindly agreed to be interviewed in front of the audience, and will teach us, first-hand, about the effects of brain injury on the language system. This interview will be followed by a lecture on Wernicke's aphasia as well as its relationship to Wernicke's area of the brain. In addition, Wernicke's aphasia will be discussed in relation to semantic dementia, a neurodegenerative disorder that is often confused with Wernicke's aphasia.

2005-10-25

Mechanisms for acoustic pattern recognition in a song bird

Timothy Gentner

+ more

The learned vocal signals of song birds are among the most complex acoustic communication signals, and offer the opportunity to investigate perceptual and cognitive mechanisms of natural stimulus processing in the context of adaptive behaviors. European starlings sing long, elaborate songs composed of short spectro-temporally distinct units called "motifs". I review studies that point out the critical importance of motifs in the song recognition, and then show how experience dependent plasticity acts to modify the single neuron and ensemble level representation of motifs in starlings that have learned to recognize different songs. Beyond the recognition of spectro-temporal patterning at the motif level, starlings also attend to statistical regularities in the sequential patterning of motifs within songs. Recent results demonstrate that starlings can learn to use arbitrary rules that describe the temporal patterning of motif sequences, including at least one rule that meets the formal definition of a non-regular context-free grammar -- an ability hypothesized as uniquely human. I discuss these data in the context of comparative models for vocal pattern recognition and syntactic processing.
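The grammar contrast at stake can be made concrete with a small generator. The sketch below is only schematic (the motif labels and class sizes are invented): it produces motif sequences from a regular alternating (AB)^n rule and from A^n B^n, a standard example of a rule that is context-free but not regular.

# Minimal sketch, assumed motif labels: regular (AB)^n vs. context-free A^n B^n sequences.
import random

A_MOTIFS = ["rattle1", "rattle2"]      # hypothetical A-class motif names
B_MOTIFS = ["warble1", "warble2"]      # hypothetical B-class motif names

def regular_ABn(n):
    """(AB)^n: A and B motifs simply alternate; recognizable by a finite-state device."""
    return [random.choice(m) for _ in range(n) for m in (A_MOTIFS, B_MOTIFS)]

def contextfree_AnBn(n):
    """A^n B^n: n A-motifs followed by n B-motifs; requires tracking how many As occurred."""
    return [random.choice(A_MOTIFS) for _ in range(n)] + \
           [random.choice(B_MOTIFS) for _ in range(n)]

print(regular_ABn(3))
print(contextfree_AnBn(3))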

2005-10-18

Eileen Cardillo

+ more

2005-10-11

Noriko Hoshino

+ more

2005-10-04

Mechanisms for acoustic pattern recognition in a song bird

Timothy Gentner

+ more

The learned vocal signals of song birds are among the most complex acoustic communication signals, and offer the opportunity to investigate perceptual and cognitive mechanisms of natural stimulus processing in the context of adaptive behaviors. European starlings sing long, elaborate songs composed of short spectro-temporally distinct units called "motifs". I review studies that point out the critical importance of motifs in the song recognition, and then show how experience dependent plasticity acts to modify the single neuron and ensemble level representation of motifs in starlings that have learned to recognize different songs. Beyond the recognition of spectro-temporal patterning at the motif level, starlings also attend to statistical regularities in the sequential patterning of motifs within songs. Recent results demonstrate that starlings can learn to use arbitrary rules that describe the temporal patterning of motif sequences, including at least one rule that meets the formal definition of a non-regular context-free grammar -- an ability hypothesized as uniquely human. I discuss these data in the context of comparative models for vocal pattern recognition and syntactic processing.

2005-05-31

Gestures worth a thousand words: Commonalities and differences in gesture and picture comprehension.

Ying Wu

+ more

Conversation is frequently accompanied by gestures which depict visuo-semantic features related to the content of the talk in progress. Does the capacity to construct meaning through gesture engage processes and neural substrates similar to those recruited in the comprehension of image-based representations of the visual world? This talk will compare event-related potentials (ERPs) elicited by photographs of common objects and iconic co-speech gestures. Previous research has demonstrated that the second member of an unrelated picture pair results in an enhanced negative-going deflection of the ERP waveform (N400) as compared to responses elicited by related picture probes. An earlier negative-going component, the N300, has also been found to exhibit sensitivity to manipulations of semantic relatedness. If the comprehension of pictures and gestures is mediated by overlapping systems, similarly distributed effects of congruency on the N300 and N400 components should be observed.

These predictions were addressed by extracting still images from videotaped segments of gestures in order to elicit brain responses comparable to those elicited by pictures. 16 healthy adults viewed contextually congruous and incongruous gesture stills, dynamic gestures, and photographs of common objects. N400 effects were observed in response to static and dynamic gestures, as well as pictures. Static gesture stills and pictures also elicited N300 effects with similar distributions, suggesting overlap in the systems mediating some aspects of gesture and picture comprehension. However, differences in the overall morphology of ERP waveforms suggest non-identical neural sources as well.

2005-05-24

How adults and children detect meaning from words and sounds: An ERP study

Alycia Cummings

+ more

This study examined differences in neural processing of meaningful (words and natural sounds) vs. non-meaningful (sounds) information and of meaningful information presented in the form of words vs. natural sounds. Event-related potentials (ERP) were used to obtain precise temporal information. Action-related object pictures were presented with either a word or natural sound, and non-meaningful drawings were paired with non-meaningful sounds. The subjects pressed a button indicating whether the picture and sound matched or mismatched. Non-meaningful stimuli were matched by “smoothness”/”jaggedness”.

In both adults and children, words and environmental sounds elicited similar N400 amplitudes, while the non-meaningful sounds elicited significantly smaller N400 amplitudes. While there were no left hemisphere differences, the right hemisphere neural networks appeared to be more active during environmental sound processing than during word processing. The meaningful sounds showed similar scalp distributions, except at the most anterior electrode sites. In adults, the environmental sound N400 latency was significantly earlier than the word latency, while there were no reaction time differences.

As compared to the non-meaningful stimuli, meaningful sounds and words elicited widespread activation, which might reflect neural networks specialized to process semantic information. However, there was some evidence for differences in neural networks processing lexical versus meaningful, non-lexical input.

2005-05-17

"This is a difficult subject"

Masha Polinsky and Robert Kluender

+ more

Subject-object asymmetries are well documented in linguistic theory. We review a variety of evidence from child language acquisition, normal adult sentence processing, language and aging studies, and cross-linguistic patterns supporting the notion that subjects present particular difficulties to language users and are therefore in a class by themselves. Using notions from information structure and judgment types (thetic vs categorical), we explore some avenues for addressing the intrinsic difficulty of subjects.

2005-05-10

Is there a processing advantage for analytic morphology? Evidence from a reading-time study of English comparatives

Jeremy Boyd

+ more

In English, adjectives can be inflected for comparison in two different ways: through '-er' suffixation (bigger, happier), or via syntactic combination with 'more' (more belligerent, more romantic). The first option--where the comparative occurs as a single word--is referred to as SYNTHETIC MORPHOLOGY. The second option--in which the comparative is realized as multiple words--is called ANALYTIC MORPHOLOGY. There is some reason to believe that analytic realization confers certain advantages that synthetic realization does not. Creoles, for example, tend to favor analytic morphology (Bickerton, 1981; 1984). Some researchers claim that this fact indicates that analytic morphology is inherently easier to handle.

Mondorf (2002; 2003) developed a specific hypothesis along these lines. In corpus analyses of adjectives that fluctuate between synthetic and analytic versions (e.g. prouder ~ more proud, crazier ~ more crazy), she found that the presence of complex syntactic environments immediately following the comparative--e.g. to-complements, as in "Some news items are more fit to print than others"--seemed to trigger the analytic variant. Mondorf argues that the analytic version is favored in these circumstances because it helps to mitigate complexity effects. She acknowledges, however, that "there is no independent empirical evidence that the analytic variant serves as a signal foreshadowing complex structures, is easier to process or [is] in other ways more suited to complex environments" (2003: 253).

In the present talk, I present results from a self-paced reading-time study that bear on these issues. Subjects were asked to read sentence pairs like the following:

Analytic Condition: Highway 95 is more pleasant TO drive during the summer months.
Synthetic Condition: Highway 95 is pleasanter TO drive during the summer months.

Reading times were recorded and compared across Analytic and Synthetic conditions to see whether there was a facilitated reading time for 'to' (in CAPS, above) when a 'more' comparative was used. Analysis shows that this was indeed the case. Whether this result really indicates an analytic processing advantage--versus an effect of grammaticality and/or frequency--will be addressed.

2005-05-03

Cross-Category Ambiguity and Structural Violations: Why "Everyone Likes to Glass" and "Nobody Touches the Agree"

Ryan Downey

+ more

Previous research suggests that violations during sentence processing may result in characteristic Event-Related Potential (ERP) patterns. One particular component, the Early Left Anterior Negativity (ELAN), has been elicited primarily in German after phrase structure violations with the following form:

1 Das Baby wurde gefüttert.
The baby was fed.

2 *Die Gans wurde im gefüttert.
*The goose was in-the fed.

Friederici et al. use the elicitation of an ELAN in phrase structure violations such as this (i.e., the reader encounters a verb when expecting a noun) as evidence that the brain is sensitive to syntactic information via an extremely early (100-250 msec) first pass parse.

To investigate what types of information the parser may be sensitive to, the present study investigated phrase structure violations during auditory processing of English sentences. Stimuli were constructed that were category-unambiguous (i.e., could only be a noun vs. could only be a verb) and frequency-biased category-ambiguous (i.e., could be used as a noun or verb, but exhibited "preference" for one). Initial results suggested an early frontal negativity to unambiguous phrase structure violations, but only when listeners heard a noun when they were expecting a verb (the opposite structure from the one studied by Friederici et al.); the other violation - hearing a verb when expecting a noun - resulted in an unanticipated early (mostly) left anterior positivity. There were no significant ERP differences in the word-category ambiguous "violations". Post-hoc comparisons taking into account word concreteness yielded a potential explanation for the unpredicted initial findings. Possible alternative interpretations will be discussed. Results indicate that ERPs are useful in investigating the processing of various types of information during phrase structure violations in English.

2005-04-26

Individual differences in second language proficiency: Does musical ability matter?

Bob Slevc

+ more

This study examined the relationship between musical ability and second language (L2) proficiency in adult learners. L2 ability was assessed in four domains (receptive phonology, productive phonology, syntax, and lexical knowledge), as were various other factors that might explain individual differences in L2 ability, including age of L2 immersion, patterns of language use and exposure, phonological short-term memory, and motivation. Hierarchical regression analyses were conducted to determine if musical ability explains any unique variance in each domain of L2 ability after controlling for other relevant factors. Musical ability predicted ability with L2 phonology (both receptive and productive) even when controlling for other factors, but did not explain unique variance in L2 syntax or lexical knowledge. These results suggest that musical skills can supplement the acquisition of L2 phonology and add to a growing body of evidence linking language and music.
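The logic of the hierarchical regression can be sketched as follows: fit a model with the control predictors only, add musical ability, and test whether the increase in explained variance is reliable. Everything below (data, predictor names, effect sizes) is simulated for illustration and is not the study's dataset.

# Sketch of nested-model comparison with placeholder data (not the study's variables).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50
controls = rng.standard_normal((n, 3))        # stand-ins for control predictors (assumed)
music    = rng.standard_normal(n)             # musical-ability score (assumed)
l2_phon  = controls @ [0.3, 0.2, 0.1] + 0.4 * music + rng.standard_normal(n)

base = sm.OLS(l2_phon, sm.add_constant(controls)).fit()
full = sm.OLS(l2_phon, sm.add_constant(np.column_stack([controls, music]))).fit()
print("R^2 change:", full.rsquared - base.rsquared)
print(full.compare_f_test(base))              # F-test on the nested models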

2005-04-19

CRL Research

Masha Polinsky and Vic Ferreira

+ more

Masha Polinsky and Vic Ferreira will be talking about research that is currently underway at CRL.

2005-03-08

Processing focus violations: Comparing ERP and eye-tracking data

Wind Cowles

+ more

The linguistic focus in an answer to a wh-question must correspond to the wh-phrase in the question. When focus is mis-assigned and this correspondence is not possible, the answer becomes infelicitous, even when it provides the information asked for by the question. An example of this can be seen in (1), with the focus of the answers indicated by all caps:

(1) Who did the queen silence, the banker or the advisor?
a. It was the BANKER that the queen silenced.
b. #It was the QUEEN that silenced the banker.

In this talk I'll address how comprehenders respond to the kind of focus violation shown in (1b) by presenting the results of experiments using ERP and eye-tracking methodologies. The results of these studies provide converging evidence that such violations are treated by comprehenders as essentially semantic in nature. I will discuss these results in terms of (a) how comprehenders use focus information during processing and (b) the additional information that such direct comparison of ERP and eye-tracking data can provide.

2005-02-22

Processing and syntax of control structures in Korean

Nayoung Kwon & Maria Polinsky

+ more

Korean shows the following productive alternation in object control:

i. John-NOM Mary_i-ACC [e_i to leave] persuaded
ii. John-NOM e_i [Mary_i-NOM to leave] persuaded

Primary linguistic data (Monahan 2004) indicate that (ii) must be analyzed as a form of backward object control (BC). This study was designed to look for processing evidence supporting the BC analysis.

Previous experimental studies have shown that cataphoric relations take longer to process than anaphoric relations (Gordon & Hendrick 1997, Sturt 2002, Kazanina & Phillips 2004). This predicts that BC (1b) should elicit slower reading time (RT) than forward control (FC, 1a).

(1) ‘The marketing department of the production persuaded the heroine to appear on a popular talk show to advertise the movie.’
(a) W7 heroine_i-acc [e_i W8 popular W9 talk_show-to W10 go-comp] W11 persuaded (FC)
(b) e_i W7 [heroine_i-nom W8 popular W9 talk_show-to W10 go-comp] W11 persuaded (BC)
(c) [e_i W7 popular W8 talk_show-to W9 go-comp]_j W10 heroine_i-acc t_j W11 persuaded (scrambled FC)

To test these predictions, a self-paced reading time (RT) study of Korean control was conducted using FC (1a), BC (1b), and arguably scrambled FC (1c) (n=40, each type, 23 subjects). At words 7 and 10, FC (1a) was processed significantly faster than BC (1b). Because of word order differences in scrambled FC (1c), RT from W7 to W10 was collapsed; RT in that region was greater for BC than for both FC types. The difference between scrambled and unscrambled FC was non-significant.

These results provide experimental evidence for the psychological reality of backward control. While the slower RT at W7 may be due to clause-boundary effects (Miyamoto 1997), the effect at W10 is unambiguously due to BC, as the parser back-associates the overt controller with the gap.

Control as A-movement: Evidence from the processing of forward and backward control in Korean.

2005-02-15

Imitation and language learning

Michael Ramscar

+ more

In this talk I'll present a series of studies from my lab showing that children can master irregular plural forms simply by repeating erroneous over-regularized versions of them. We model and predict this phenomenon in terms of successive approximation in imitation: children produce over-regularized forms because the representations of frequent, regular items develop more quickly, such that at the earliest stages of production they interfere with children's attempts to imitatively reproduce irregular forms they have heard in the input. As the strength of the representations that influence children's productions settles asymptotically, the early advantage for frequent forms is negated, and children's attempts to imitate the forms they have heard are probabilistically more likely to succeed (a process that produces the classic U-shape of children's acquisition of inflection). These data show that imitation allows children to acquire correct linguistic behavior in a situation where, as a result of philosophical and linguistic analyses, it has often been argued that it is logically impossible for them to do so. Time permitting, I'll then discuss how imitation allows signing children to "invent language", why more imitation might help adults better learn a second language, and other primates a first.

2005-02-08

Grant Goodall, Department of Linguistics, UCSD

+ more

'Syntactic satiation' is the phenomenon in which a sentence that initially sounds bad starts to sound noticeably better with repeated exposure. Snyder (2000) has shown that this phenomenon can be induced experimentally but that only some unacceptable sentence types are susceptible. In this talk, I present the results of an experiment which attempts to shed light on whether satiation effects can still be induced even when the lexical items are varied in each presentation (in Snyder's study they were not), whether satiation can be induced in other languages, and whether satiation can be applied usefully to determine the source of unacceptability of sentences in one language or across languages. I will focus on cases of illicit lack of inversion in wh-questions in English and Spanish (e.g., *What John will buy? and *Qué Juan compró?) and I will show that satiation is observed in Spanish but not in English in these cases, suggesting that different mechanisms underlie inversion in the two languages.

2004-03-09

Brain potentials related to negation and sentence verification

Lea Hald

+ more

Surprisingly little is known about the relative time courses of establishing the meaning and truth of linguistic expressions. A previous ERP study by Hald & Hagoort (2002) utilizing the N400 effect indicated that during on-line sentence comprehension, world knowledge information needed to determine the truth value of a sentence is integrated as quickly as lexical semantic information. However, an earlier ERP study by Fischler, Bloom, Childers, Roucos and Perry (1983) found that the N400 reflected a preliminary stage of sentence comprehension rather than the ultimate falseness of the sentence. Using sentences like the following, Fischler et al. found that for negative sentences the N400 reflected a mismatch between terms (robin, tree) at a preliminary stage of processing.

True, affirmative A robin is a bird.

False, affirmative A robin is a tree. (N400 for tree)


True, negative A robin is not a tree. (N400 for tree)

False, negative A robin is not a bird.

One possible explanation for the Fischler et al. results is that the sentences used always contained a categorical relationship between the first noun and the critical noun in the sentence (such as robin - bird).

In order to investigate this hypothesis we tested the original Fischler items in addition to items which did not contain this category relationship. The new items were like the following:

True, affirmative Hawaii is tropical.

False, affirmative Hawaii is cold.


True, negative Hawaii is not cold.

False, negative Hawaii is not tropical.

Contrary to the hypothesis that the original Fischler et al. results were a reflection of a categorical relationship between the first noun and the target noun, preliminary data indicate that these new items replicate the original pattern of results. A discussion of these results in relation to the N400 and sentence processing will follow.

2004-03-02

Listening to speech activates motor areas involved in speech production

Stephen Wilson

+ more

Language depends upon the maintenance of parity between auditory and articulatory representations, raising the possibility that the motor system may play a role in perceiving speech. We tested this hypothesis in a functional magnetic resonance imaging (fMRI) study in which subjects listened passively to monosyllables, and produced the same speech sounds.

Listening to speech consistently activated premotor and primary motor speech production areas located on the precentral gyrus and in the central sulcus, supporting the view that speech perception recruits the motor system in mapping the acoustic signal to a phonetic code.

2004-02-17

"Redefining semantic and associative relatedness"

Ken McRae, U. of Western Ontario
Mary Hare, Bowling Green State University
Patrick Conley, U. of Western Ontario

+ more

The concepts of semantic and associative relatedness are central in both psycholinguistic and memory research. However, over time the definition of semantic relatedness has become overly narrow (limited to category co-ordinates), whereas the operationalization of associative relatedness (word association norms) has become its definition. These facts have led to confusion in the semantic memory and language understanding literatures, both theoretically and methodologically. The goals of this research are to redefine and resituate semantic and associative relatedness (and thus the structure of semantic memory), argue that "mere association" does not exist, re-evaluate the priming literature in this new light, and offer suggestions regarding future research.

2004-02-10

Morphological Universals and the Sign Language Type

Mark Aronoff, Irit Meir, Carol Padden, & Wendy Sandler

+ more

The morphological properties that vary across the world's languages often come in clusters, giving rise to a typology. Underlying that typology are more general properties, found in most of the world's languages, and claimed to be universals. Natural sign languages define a new category that is at once typological and fully general: they appear to be characterized universally by modality specific morphological properties. Many of these properties, taken individually, are not outside the range of morphological possibilities found in spoken languages. It is the predictability with which the properties cluster in sign languages, together with the rapidity with which they develop in these young languages, that define the language type.

In addition to modality-driven universals, the sign languages we have studied also show language-particular processes that are more directly comparable to those of spoken languages. Our goal is to identify universal features of morphology in human language that underlie both.

The sign language universal process we describe here is verb agreement.

The system has regular and productive morphological characteristics that are found across all sign languages that have been well studied: (1) Only a subset of verbs are marked for agreement (Padden, 1988). (2) That subset is predictable on the basis of their semantics; they involve transfer (Meir, 1998). (3) The grammatical roles that control agreement are source and goal. (4) The system is fully productive. (5) The formal instantiation of agreement is simultaneous rather than sequential. Our claim is that the universality of this system in sign languages, and the relatively short time span over which it develops, derive from the interaction of language with the visuo-spatial domain of transmission.

Yet at the same time, as its label suggests, verb agreement in sign languages follows the same syntactic restrictions as in spoken languages: in all languages, verbs may agree only with indexically identifiable properties of their subjects and objects (person, number, and gender in spoken languages; referential indices in sign languages).

This indicates that the mechanism of agreement is universally available to human language (Aronoff, Meir, & Sandler, 2000).

We present new evidence that even iconic, sign language universal morphology does not arise overnight. Current work on a new, isolated sign language used in a Bedouin village reveals the kernels of verb agreement that have not yet developed into a full-fledged morphological system. We conclude that: (1) universal morphological properties underlie sign language typical grammar, (2) modality of transmission can have a profound influence on grammatical form, and (3) despite the predictable influence of modality on language form, the normal course of language development and change is detectable in sign language.

2004-02-03

Language movement in Scientific Discourse

Robert Liebscher
Richard K. Belew

+ more

We focus on academic research documents, where the date of publication undoubtedly has an effect both on an author's choice of words and on a field's definition of underlying topical categories. A document must say something novel and also build upon what has already been said. This dynamic generates a landscape of changing research language, where authors and disciplines constantly influence and alter the course of one another.

2004-01-27

Syntactic persistence in non-native language production

Susanna Flett, School of Philosophy, Psychology & Language Sciences, University of Edinburgh

+ more

A key aim of second language (L2) research is to determine how syntactic representations and processing differ between native and non-native speakers of a language. My PhD work is focused on using syntactic priming tasks to investigate these differences. Syntactic priming refers to the tendency people have to repeat the type of sentence construction used in an immediately preceding, unrelated sentence. This effect suggests the existence of mental representations for particular syntactic constructions, independent of particular words and meanings.

I will describe a study from my first year project which used a dialogue task and a computerized task to look at priming of actives and passives in Spanish. Participants were native speakers, intermediate L2 speakers and advanced L2 speakers of Spanish (for whom English was the L1). Results demonstrated a significantly stronger priming effect in the L2 speakers compared with the native speakers. This may be explained by passives being more common in English than Spanish, and this preference being transferred to the L2. In addition, for L2 speakers the message-to-syntax mappings will be relatively weaker than those in a native speaker and so more susceptible to priming manipulations. I will discuss these results and describe plans for future studies using this technique to look at L2 speakers.

2004-01-20

Discourse Adjectives

Gina Taranto

+ more

In this talk I introduce Discourse Adjectives (DAs), a natural class whose members include apparent, evident, clear, and obvious, as in:

(1) a. It is clear that Briscoe is a detective.

      b. It is clear to you and me that Briscoe is a detective.

Of primary concern are the semantics of DAs in sentences like (1a), in which the conceptually necessary experiencer of clear is not expressed syntactically, and is interpreted much like (1b), with the relevant experiencers of clarity interpreted as the discourse participants - that is, both the speaker and the addressee.

I argue that the meanings of utterances such as (1a) are highly unusual semantically, in that they operate entirely on a metalinguistic level. Interlocutors use such utterances to provide information about their conversation rather than their world. Sentence (1a) does not provide new information about Briscoe; rather, it provides information about the interlocutors' beliefs about the designated proposition, in terms of the current conversation.

My analysis begins with a Stalnakerian model of context-update, as formalized by Heim (1982, 1983) and Beaver (2000). I augment this model with Gunlogson's (2001) representation of individual commitment sets of speaker and addressee within the Common Ground of a discourse, and Barker's (2002) compositional theory of vagueness.

My proposal relies on the (vague) degree of probability that the Discourse Participants assign to the truth of a proposition; the context-update effect of an utterance of (1a) removes from consideration those possible worlds in which the discourse participants do not believe that the proposition expressed by 'Briscoe is a detective' satisfies a vague minimum standard for 'clarity'. The semantics of utterances with DAs are shown to depend directly on probability, and only indirectly on truth. I argue that after an utterance with a DA is accepted into the Common Ground, interlocutors are licensed to proceed as if the designated proposition is true, if only for the current discussion.
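One way to schematize the update just described (a compressed rendering for exposition, not Taranto's actual formalization) is:

c' \;=\; \{\, w \in c \;:\; P_{DP,w}(p) \ge \theta_{\mathrm{clear}} \,\}

where c is the set of worlds in the Common Ground before the utterance, c' the set after updating with "it is clear that p", P_{DP,w}(p) the degree of belief the discourse participants assign to p in w, and \theta_{\mathrm{clear}} the vague minimum standard for clarity; the belief operator in the prose description is collapsed into this probability for brevity.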

DAs are argued to have the ability to publicly commit all discourse participants to the content of their complements. This is shown to have a synchronization effect on the Common Ground of a discourse, which explains how it can be useful to have an expression type that has no normal descriptive content.

2004-01-13

"Cultural differences in non-linguistic rhythm perception: what is the influence of native language?"

John Iversen & Ani Patel

+ more

Experience with one's native language influences the way one hears speech sounds (e.g., phonemes), and enculturation in a particular musical tradition influences the perception of musical sound. However, there is little empirical research on cross-domain influences: Does one's native language influence the perception of non-linguistic sound patterns? To address this issue, we have focused on rhythm, an important dimension of both language and music. We examined the perception of one aspect of rhythm (grouping) by native speakers of English and Japanese, languages with distinct rhythmic structure. We constructed simple sequences of tones alternating in either amplitude (loud-soft), pitch (high-low), or duration (long-short). Non-musician listeners were asked to indicate their perceptual grouping of tone pairs (e.g., loud-soft or soft-loud) and the certainty of their judgment.
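A minimal sketch of the duration-alternating condition, where the cross-language grouping difference emerges, is given below; all parameter values (sample rate, tone frequency, durations, gaps) are assumptions, not the study's actual stimulus settings.

# Minimal sketch: a sequence of tones alternating only in duration (long-short).
import numpy as np

SR = 22050                      # sample rate (Hz), assumed
F0 = 440.0                      # tone frequency (Hz), assumed
GAP = 0.05                      # silent gap between tones (s), assumed

def tone(dur_s, amp=0.5, freq=F0):
    t = np.arange(int(SR * dur_s)) / SR
    return amp * np.sin(2 * np.pi * freq * t)

def alternating_duration_sequence(n_pairs, long_s=0.30, short_s=0.15):
    gap = np.zeros(int(SR * GAP))
    pieces = []
    for _ in range(n_pairs):
        pieces += [tone(long_s), gap, tone(short_s), gap]   # long, short, long, short, ...
    return np.concatenate(pieces)

seq = alternating_duration_sequence(8)
print(len(seq) / SR, "seconds of audio")   # listeners report long-short vs. short-long grouping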

Native English speakers in San Diego and native Japanese speakers in Kyoto participated, each responding to a total of 32 stimuli. We found a dramatic difference between English and Japanese speakers in the perception of duration sequences: Japanese speakers preferentially chose a long-short grouping, while English speakers strongly preferred a short-long grouping. In contrast, no marked differences were seen in the other conditions. We examine the hypothesis that the rhythmic structure of language creates perceptual biases that influence non-linguistic rhythm perception. Specifically, we compare the rhythmic structure of Japanese and English words to see if long-short syllabic patterns are more common in Japanese than English, and vice-versa.

2003-12-02

ERPs Associated with Gender and Number Agreement during Syntactic Processing

Horacio Barber

+ more

Languages tend to represent gender as a conceptual characteristic and/or as a formal property of words. In contrast, number is always considered a conceptual feature signalling the quantity of the referent. Moreover, from a lexical point of view, gender information is probably retrieved directly from the word form, whereas number is considered a morphological marking that combines with the stem it modifies. These lexical features probably have relevant consequences on the syntactic level.

The role of grammatical gender and number representations in syntactic processes during reading in Spanish was studied in two different experiments. ERPs were recorded while Spanish speakers read word pairs (Experiment 1) or sentences (Experiment 2) in which gender or number agreement relationships were manipulated. Disagreement in word pairs formed by a noun plus an adjective (e.g., faro-alto [high-lighthouse]) produced an N400-type effect, while word pairs formed by an article plus a noun (e.g., el-piano [the-piano]) showed an additional left-anterior negativity effect (LAN). The agreement violations with the same words inserted in sentences (e.g., El piano estaba viejo y desafinado [the m-s piano m-s was old and off-key]) resulted in a LAN-P600 pattern. Differences between grammatical gender and number disagreement were found in late measures. In the word pairs experiment, P3 peak latency varied across conditions, being longer for gender than for number disagreement. In a similar way, in the sentence experiment, the last segment of the P600 effect was larger for gender than for number violations. These ERP effects support the idea that reanalysis or repair processes after grammatical disagreement detection could involve more steps in the case of gender disagreement.

2003-11-25

Locality, Frequency, and Obligatoriness in Argument Attachment Ambiguities

Lisa King

+ more

Within the context of human sentence comprehension, one intensely investigated issue is whether sentence processing is immediately influenced by non-structural information. One potential problem with previous studies is that the grammatical function of the ambiguous constituent was typically manipulated along with the attachment ambiguity under consideration. In (1) the prepositional phrase (PP) can either be a modifier of the verb (1a) or an argument of the noun (1b) (adapted from Clifton et al 1991).

(1) a. The man expressed his interest in a hurry during the storewide sale.

b. The man expressed his interest in a wallet during the storewide sale.

Some studies provide evidence that ambiguous constituents are preferentially processed as arguments (e.g. Schutze & Gibson 1999), whereas other studies show limited or no argument preference (Kennison 2002, Ferreira & Henderson 1990). It is therefore difficult to determine if there was any effect of grammatical function ambiguity in previous studies.

The experiments to be discussed employed a moving window reading paradigm to investigate such factors as locality, obligatoriness of the argument, and co-occurrence frequency, while holding constant the grammatical function of the ambiguous constituent (2). In (2a), the ambiguous PP complement must attach to the matrix verb. In (2b), the ambiguous PP complement must attach to the embedded verb.

(2) V[obligatory PP complement]-NP-that-NP-V[optional PP complement]-PP

a. Phoebe put the magazines that Jack left under the bed before she made it.

b. Phoebe put the magazines that Jack left under the bed into the closet.

Three experiments tested the predictions made for the sentences in (2) by the Garden Path Theory (GPT; Frazier 1979), the Dependence Locality Theory (DLT; Gibson 1998, 2000), the Late Assignment of Syntax Theory (LAST; Townsend & Bever 2000), and the Co-occurrence Frequency Hypothesis (CFH).
The GPT and the DLT predict that only the structure in (2a) should have a garden path. Conversely, the LAST predicts that only the structure in (2b) should have a garden path. A corpus analysis and a production task permitted the embedded verbs to be divided into three categories: neutral bias, bias for, and bias against a PP complement. The CFH predicts that the structure in (2a) should have a garden path when the embedded verb is biased for a PP complement, and the structure in (2b) should have a garden path when the embedded verb is biased against a PP complement. The results from all three experiments showed that the structure in (2a) had the pattern of reading difficulties predicted by the GPT and the DLT.
Turning to the structure in (2b), the results from the experiment using neutral-bias verbs were also consistent with the predictions made by the GPT and the DLT. The results from the two experiments which used biased verbs, however, showed patterns of reading difficulties that were not predicted by the GPT and the DLT, but may be accounted for by positing a role for the frequency of co-occurrence between the embedded verb and its optional PP complement.

2003-11-17

Is there a dissociation between verbal and environmental sound processing in young children?

Alycia Cummings

+ more

This study directly compared 15-, 20-, and 25-month-old infants' (n = 11, 15, and 14, respectively) knowledge of familiar environmental sounds to their related verbal descriptions, i.e. "Moo" versus "Cow Mooing". Children were also placed into one of two verbal proficiency groups: Low (<200 productive words) or High (>200 productive words). Using an online picture/auditory word-matching paradigm, where the aural stimuli were either environmental sounds or their linguistic equivalents, infants' comprehension was measured for speed and accuracy in the identification of a target object.

Looking time accuracy improved across age levels (F=7.93, p<.001), demonstrating that some verbal and sound knowledge is related to life experience and/or maturational factors. Infants who were more verbally proficient also responded more accurately in the experiment than infants with small productive vocabularies (F=8.4, p<.006). The interaction between age group and linguistic domain was not significant, suggesting that children in each age group respond in similar manners to both speech and sound object labels. The interaction between CDI grouping and domain did reach significance: Infants with smaller productive vocabularies did respond more accurately to sound than to verbal labels, a differentiation between modalities that disappeared in children with larger vocabularies (F=10.03, p<.003).

Infants' looking time accuracy was also temporally sensitive. As more auditory information became available, all of the infants responded more accurately (F=41.35, p<.0001). This demonstrates that comprehension is not a static state, as even the youngest infants appeared to be constantly monitoring and updating their environmental representation.

This experiment provided no evidence for a delayed onset of environmental sound comprehension or for the domain specificity of language. Since the youngest and most language-inexperienced infants showed differential responding to sounds versus speech, environmental sounds could be a precursor to language by providing a bootstrap to later acquisition for some children. But the most consistent pattern was the finding that both speech and meaningful sounds appeared to co-develop at each age, thus contributing to the mounting evidence suggesting that recognition of words and recognition of familiar auditory sounds share a common auditory processor within the brain. Supposed "language-specific" cognitive processes are now being implicated in what would otherwise be considered nonlinguistic tasks.

2003-11-04

First language acquisition: Building lexical categories from distributional information.

Holger Keibel

+ more

Recent methodological advances in computational linguistics demonstrated that distributional regularities in linguistic data can be one reliable and informative source of evidence about lexical categories (noun, verb, etc.). This might help to explain the observed robustness of early grammatical development despite the noisy and incomplete nature of the language samples that infants are exposed to. In this context, Redington, Chater, & Finch (1998) and Mintz, Newport, & Bever (2002) extensively explored co-occurrence statistical approaches: Words which tend to co-occur with the same kinds of neighboring words were inferred to be members of the same category.
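The co-occurrence idea can be illustrated with a toy example: represent each word by counts of its immediate left and right neighbours and compare those context vectors. The one-word window, the miniature "corpus", and the cosine measure below are illustrative choices, not necessarily those of the studies cited.

# Toy sketch of distributional categorization; all data below are placeholders.
from collections import Counter
from itertools import combinations

corpus = ["the dog sees the cat", "the cat sees the dog",
          "a dog chases a cat", "a cat chases a dog"]          # hypothetical data
contexts = {}
for sent in corpus:
    words = sent.split()
    for i, w in enumerate(words):
        left = words[i - 1] if i > 0 else "<s>"
        right = words[i + 1] if i < len(words) - 1 else "</s>"
        contexts.setdefault(w, Counter()).update([("L", left), ("R", right)])

def cosine(c1, c2):
    keys = set(c1) | set(c2)
    dot = sum(c1[k] * c2[k] for k in keys)
    norm = (sum(v * v for v in c1.values()) * sum(v * v for v in c2.values())) ** 0.5
    return dot / norm if norm else 0.0

for w1, w2 in combinations(sorted(contexts), 2):
    print(f"{w1:7s} {w2:7s} {cosine(contexts[w1], contexts[w2]):.2f}")
# "dog"/"cat" and "sees"/"chases" end up with similar context vectors,
# suggesting noun-like and verb-like clusters respectively.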

We applied this general paradigm to child-directed speech in large high-density corpora. Beyond verifying the potential usefulness of distributional information, we sought to identify the precise regularities which this usefulness mainly relies upon. The results not only account for the robustness of the co-occurrence approach, they also reveal why it is more informative about some categories than others. This might in turn help to explain empirical findings regarding the order in which lexical categories typically emerge in first language acquisition (e.g. Olguin & Tomasello, 1993; Tomasello & Olguin, 1993). The focus of my talk will be on the differences between the categories noun and verb.

2003-10-21

"In search of the brain's organization for meaning: the N-V double-dissociation and other interesting phenomena"

Analia Arevalo

+ more

In this talk I will present some of the work we have conducted in our search to understand how the brain is organized for meaning, both linguistic and non-linguistic. In one of our studies, we began by exploring the notion of Noun-Verb dissociations, which has often been studied but remains controversial. We tested a group of 21 aphasic patients (Wernicke's, Broca's, and Anomics) along with a group of college-aged and age-matched controls on a word production task which counterbalanced noun and verb stimuli across three popular production modalities: Picture-naming (PN), reading and repetition. Results revealed that PN was the most difficult task across groups, and also the only modality in which any significant Noun-Verb differences were observed (contrary to other similar studies using blocked presentations of Noun-only or Verb-only stimuli). In addition, analyses over items revealed that all groups displayed a Noun advantage (commonly seen for healthy subjects and contrary to the notion of a Verb advantage in certain brain-injured groups). However, analyses over subjects revealed one piece of evidence for a Verb advantage in Wernicke's aphasics, who were significantly faster at processing verbs than nouns (again, only in the PN modality).

These results led us to search for possible outliers and analyze these patients' performance on an individual basis. I describe three ways in which we conducted this outlier search and discuss our findings as well as ways of applying neuroimaging techniques, such as fMRI and VLSM (Voxel-based Lesion-Symptom Mapping) to this type of data.

In addition, I discuss ways in which we have steered away from the Noun-Verb lexical distinction, to deeper, sensorimotor-based distinctions. In particular, we have investigated the notion of manipulability (items that do or do not involve hand imagery) as another way in which these same Noun-Verb stimuli may be categorized. I describe current studies which have focused on this question as well as how we applied it to our own data with aphasics by creating our own objective classification of hand imagery based on our Gesture Norming study with healthy controls. I describe this study as well, along with some preliminary results and its relevance and application to the many questions we pose.

2003-10-14

"Brain areas involved in the processing of biological motion revealed by voxel-based lesion-symptom mapping (VLSM) and fMRI"

Ayse Pinar Saygin

+ more

Image sequences constructed from a dozen point-lights attached to the limbs of a human actor can readily be identified as depicting actions. Regions in lateral temporal cortex (most consistently in the vicinity of the superior temporal sulcus, STS), which respond to this kind of motion, have been identified in both human and macaque brains. On the other hand, in the macaque brain “mirror neurons” which fire during both action production and passive action observation have been found in frontal cortex. Subsequent work has revealed that observing others’ actions leads to activations in inferior frontal cortical areas in humans as well. In humans, it appears that this response is relatively left-lateralized and overlaps partially with areas of the brain known to be involved with processing language. This posterior-frontal network is of interest to many cognitive scientists because it helps provide a unifying neural basis for perception, action, and perhaps even language.

Given that point-light biological motion figures depict actions, could their perception also recruit frontal cortex in a similar manner? Or are these stimuli too simplified to drive the neural activity in these frontal action observation areas?

We addressed this question in two studies: The first was a neuropsychological study which tested biological motion perception in left-hemisphere injured patients. Brain areas especially important for biological motion processing were identified using voxel-based lesion symptom mapping (VLSM). The second was an fMRI study on healthy normal controls. We scanned participants as they viewed biological motion animations, "scrambled" biological motion animations (which contain the local motion vectors but not the global form) and static frames from the same animations (baseline condition). Data were analyzed using surface-based techniques including high-resolution surface-based intersubject averaging.

Collaborators: S.M. Wilson, E. Bates, D.J. Hagler Jr., M.I. Sereno

2003-10-06

"Using CORPUS Data to Model Ambiguity Resolution and Complementizer Use"

Doug Roland / Jeff Elman / Vic Ferreira

+ more

Structural ambiguities, such as the post-verbal Direct Object/Sentential Complement ambiguity, which occurs in examples such as (1), where the post-verbal NP can either be a direct object (2) or a subject (3), have long been used to study sentence processing.

(1) The people recalled the governor ...

(2) ... and elected a new one (DO).

(3) ... was still in office (SC-0).

However, very important questions remain unanswered: How much information is available to the comprehender as they process a structurally ambiguous sentence? And is the information used to resolve this ambiguity specific to these structures, or is it more general information associated with subjecthood and objecthood? If the information used to resolve this ambiguity is generic, then sentential complement examples with (4) and without (5) the complementizer that should have similar properties relative to direct object examples. However, some evidence suggests that complementizer use is not arbitrary (e.g. Ferreira & Dell, 2000; Hawkins, 2002; Thompson & Mulac, 1991).

(4)  Chris admitted that the students were right (SC-that).

(5)  Chris admitted the students were right (SC-0).

We use the 100 million word British National Corpus to investigate the extent of ambiguity and the amount and specificity of the information available for disambiguation in natural language use (in contrast with the isolated contexts used in psycholinguistic experiments). We prepared a database of the approximately 1.3 million sentences in the BNC that contained any of the 100 DO/SC verbs listed in Garnsey, Pearlmutter, Myers, and Lotocky (1997), and identified all DO, SC-0, and SC-that examples. These examples were labeled for a variety of formal and semantic properties. The formal properties included the length of the subject and post-verbal NPs and their heads, and the logarithm of the lexical frequency of the heads of the subject NP and the post-verbal NP. The semantic properties consisted of automatically ranking the subject and post-verbal NPs and their heads on twenty semantic dimensions based on Latent Semantic Analysis (Deerwester, Dumais, Furnas, Landauer, & Harshman, 1990).

We then performed a series of regression analyses on this data. Our main findings include:

(1) The resolution of 86.5% of DO/SC-0 structural ambiguities can be correctly predicted from contextual information. This suggests that sufficient information is nearly always available to determine the correct structural analysis before reaching what is traditionally considered to be the disambiguation point. Additionally, through the analysis of cases where the model predicts the wrong subcategorization, the approach allows for the identification and detailed analysis of truly ambiguous or garden-path cases.

(2) The factors used by the model to resolve the DO/SC-0 ambiguity cannot be used to correctly identify pseudo-ambiguous SC-that examples as sentential complements. In fact, SC-that and DO examples have similar properties and form an opposition to SC-0 examples.

(3) The presence/absence of the complementizer that can be correctly predicted by the model in 77.6% of SC-0/SC-that examples, supporting previous evidence that complementizer use is not arbitrary. The complementizer that is used specifically in cases where the SC has properties similar to those of DO examples.
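
The kind of regression described above can be sketched roughly as follows; the feature set, the simulated data, and the use of scikit-learn's logistic regression are illustrative stand-ins for the actual BNC annotations and modeling, kept only to show how contextual features can be used to predict the DO vs. SC-0 resolution.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Placeholder formal features for each ambiguous example:
# subject NP length, post-verbal NP length, log frequency of the two NP heads.
X = np.column_stack([
    rng.integers(1, 8, n),
    rng.integers(1, 10, n),
    rng.normal(8.0, 2.0, n),
    rng.normal(7.0, 2.0, n),
])

# Simulated outcome (0 = DO, 1 = SC-0) that depends weakly on the features,
# standing in for the hand-labeled subcategorization of each BNC example.
logit = 0.6 * X[:, 1] - 0.4 * X[:, 0] - 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
print("proportion of ambiguities resolved correctly:", model.score(X, y))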

2003-05-20

"Admitting that admitting sense into corpus analyses makes sense"

Mary Hare, Bowling Green State University

+ more

Linguistic and psycholinguistic research has documented that there is a close relationship between a verb's meaning and the syntactic structures in which it occurs, and that learners and comprehenders take advantage of this relationship during both acquisition and processing (e.g. Dowty, 1991; Fisher, Gleitman, & Gleitman, 1991; Hare, McRae, & Elman 2003; Jackendoff, 2002).

In the current work we address the implications of these facts for issues in structural ambiguity resolution. Specifically, we argue that comprehenders are sensitive to meaning-structure correlations based not on the verb itself (as recent work on verb bias effects suggest) but on the verb's specific senses, and that they exploit this information during on-line processing.

In a series of corpus analyses, we first look at the overall subcategorization biases of a set of verbs that allow multiple subcategorization frames. The results of the first analysis demonstrate that individual verbs show significant differences in their subcategorization profiles across corpora. However, many verbs that take both direct object (DO) and sentential complement (SC) subcategorization frames differ in meaning between the two cases (e.g. admit in the sense 'let in' must occur with a DO, while in the sense 'confess/concede' it may take either frame).

In a second corpus analysis, using a set of verbs taken from recent psycholinguistic experiments, we test the extent to which sense differences of this sort underlie the cross-corpus inconsistency in bias (cf. Roland & Jurafsky, 2002). Individual senses for the set of verbs were taken from WordNet's Semantic Concordance (Miller, Beckwith, Fellbaum, Gross, & Miller, 1993). Corpus examples were annotated for verb sense, and subcategorization biases were then determined for the individual senses, rather than for the verb itself. When bias estimates were calculated at the level of sense, they were much more stable across corpora. This suggests that the correlations between meaning and structure are most reliable at this level, and that this is therefore a more likely source of information for comprehenders to exploit.

Finally, we apply the results of these analyses to recent experiments on the use of verb subcategorization bias in ambiguity resolution, and show that the degree of consistency between sense-contingent subcategorization biases and experimenters' classifications largely predicts a set of recent experimental results. We argue from these findings that verb bias reflects comprehenders' awareness of meaning-form correlations, and that comprehenders form and exploit these correlations at the level of individual verb senses, rather than at the level of the verb as a whole.
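
A minimal sketch of the bias computation described above, assuming corpus examples have already been annotated for verb sense and subcategorization frame; the toy counts below are invented and only illustrate how lexeme-level and sense-level bias estimates come apart.

from collections import Counter, defaultdict

# Invented (verb, sense, frame) annotations standing in for tagged corpus examples.
annotated = [
    ("admit", "confess", "SC"), ("admit", "confess", "SC"),
    ("admit", "confess", "DO"), ("admit", "confess", "SC"),
    ("admit", "let_in", "DO"), ("admit", "let_in", "DO"),
]

by_lexeme = defaultdict(Counter)
by_sense = defaultdict(Counter)
for verb, sense, frame in annotated:
    by_lexeme[verb][frame] += 1
    by_sense[(verb, sense)][frame] += 1

def bias(counter):
    total = sum(counter.values())
    return {frame: round(count / total, 2) for frame, count in counter.items()}

# Whole-lexeme bias mixes the two senses; sense-level bias separates them.
print("admit (lexeme):", bias(by_lexeme["admit"]))
for key, counts in sorted(by_sense.items()):
    print(key, bias(counts))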

2003-05-13

"Language, music, syntax, and the brain"

Aniruddh D. Patel, The Neurosciences Institute

+ more

Language and music afford two instances of rich syntactic structures processed by the human brain. The question of the cognitive and neural relationship of these two syntactic systems is of interest to cognitive science, as it addresses the much-debated issue of modularity in language processing. Recent evidence from neuroscience regarding the relation of linguistic and musical syntax appears paradoxical, with evidence in favor of overlap from neuroimaging, and evidence against overlap from neuropsychology (dissociations). In this talk I use current cognitive theories of linguistic and musical syntactic processing to suggest a resolution to the paradox and to generate specific predictions to guide future research. The need for research on musical processing in aphasia will be discussed.

2003-05-06

"The integration of semantic versus world knowledge during on-line sentence comprehension"

Lea Hald

+ more

The current research was aimed at addressing several specific questions regarding the integration of world knowledge during language comprehension. First, what is the time course of the on-line integration of semantic and world knowledge information? Secondly, which are the crucial brain areas involved in these processes?

It is a long-standing issue whether or not semantic information is prepackaged into the mental lexicon and therefore more immediately available than the world knowledge that is necessary to assign a truth-value to a sentence. Two ERP studies were performed to investigate this question. Subjects were presented with sentences of the following types (the critical word in each example is the adjective):

(a) "Amsterdam is a city that is very old and beautiful." (Correct)

(b) "Amsterdam is a city that is very new and beautiful." (World Knowledge Violation)

(c) "Amsterdam is a city that is very thin and beautiful." (Semantic Violation)

Sentence (b) is semantically well-formed, but not true, when considering the founding date of Amsterdam. In contrast, in sentence (c) the semantics of the noun "city" makes the adjective "thin" not applicable. The question was whether or not the waveforms for (b) would result in an N400 effect with the same latency and topography as a lexical semantic N400-effect (c).

The ERP waveforms for both (b) and (c) resulted in a clear and sizable N400 effect, with comparable onset and peak latencies. Additionally, (c), but not (b) resulted in an additional late positivity with a posterior distribution.

To address the second issue of which brain areas are crucially involved in these processes, an fMRI version of the experiment was performed. Results indicated that both (b) and (c) activated the left inferior frontal gyrus. In addition, (c), but not (a) or (b), resulted in activation of the left posterior parietal region. Post-integration processes may be responsible for this differential activation found for the world knowledge and semantic conditions.

The results of this research indicate that during on-line sentence comprehension world knowledge information is integrated as quickly as lexical semantic information. The left prefrontal cortex might be involved in an aspect of this recruitment/integration process.

2003-04-22

"Flexible Induction of Meanings and Means: Contributions of Cognitive and Linguistic Factors"

Gedeon O. Deak, Dept of Cognitive Science, UCSD
Gayathri Narasimham, Dept of Psychology & Human Development, Vanderbilt University

+ more

The ability to rapidly adapt representational states and responses to unpredictable cues and exigencies is fundamental to language processing. Adaptation depends on flexible induction: selection of different regularities and patterns from a complex stimulus array in response to changing task demands. Flexible cognitive processes are believed to change qualitatively between 2 and 6 years, in conjunction with profound changes in language processing. Until recently, however, there was little data on the extent and source of changes in flexible induction in early childhood. This talk describes evidence of a shift from 3 to 4 years, in typically developing children, to improved ability to adapt to changing task cues. The shift spans verbal tests, flexible induction of word meanings and flexible rule use, as well as a new non-verbal test of flexible induction of "means," or object functions. These results imply that certain higher-order cognitive and social skills contribute to semantic and pragmatic development in early childhood.

2003-04-15

Action comprehension in aphasia: Linguistic and non-linguistic deficits and their lesion correlates

Ayse Saygin

+ more

We tested aphasic patients' comprehension of actions with the aim of examining processing deficits in the linguistic and non-linguistic domains and their lesion correlates. 30 left-hemisphere damaged patients and 18 age-matched control subjects matched pictured actions (with the objects missing) and their linguistic equivalents (printed sentences with the object missing) to one of two visually-presented objects. Aphasic patients had deficits in this task not only in the linguistic domain but also in the non-linguistic domain. A subset of the patients, largely consisting of non-fluent aphasics, showed a relatively severe deficit in the linguistic domain compared with the non-linguistic domain, but the reverse pattern of impairment was not observed. Across the group, deficits in the linguistic and non-linguistic domains were not tightly correlated, as opposed to prior findings in a similar experiment in the auditory modality (Saygin et al, 2003). The lesion sites that were identified to be important for processing in the two domains were also independent: While lesions in the inferior frontal gyrus, premotor and motor cortex and a portion of somatosensory cortex were associated with poor performance in pantomime interpretation, lesions around the anterior superior temporal lobe, the anterior insula and the supramarginal gyrus were associated with poor reading comprehension of actions. Thus, linguistic (reading) deficits are associated with regions of the brain known to be involved in language comprehension and speech articulation, whereas action comprehension in the non-linguistic domain seems to be mediated in part by areas of the brain known to subserve motor planning and execution. In summary, brain areas important for the production of language and action are also recruited in their comprehension, suggesting a common role for these regions in language and action networks. These lesion-symptom mapping results lend neuropsychological support to the embodied cognition and 'analysis by synthesis' views of brain organization for action processing.

2003-04-08

"Generative grammar and the evolution of language"

Stephen Wilson, Neuroscience Interdepartmental Program, University of California, Los Angeles

+ more

The detailed and specific mechanisms presumed by generative linguistic theories to be innate have long been thought by many to be difficult to reconcile with Darwinian evolution. In a recent review article, Hauser, Chomsky & Fitch (2002) have formulated a novel perspective on the problem, proposing a distinction between "broad" and "narrow" conceptions of the faculty of language. They argue that whereas many aspects of the "broad" faculty of language (sensory-motor and conceptual-intentional) have animal homologues and may have evolved by familiar mechanisms, the "narrow" faculty of language--roughly, generativity and recursion--is unique to humans and may constitute an exaptation. It seems to be implied that much of the complexity of grammar is an outcome of interactions between the recursive component and the broader aspects of the language faculty, in a manner strikingly reminiscent of recent emergentist approaches. In this talk I will discuss Hauser et al.'s arguments in detail, showing the continuity between their position and similar but less explicit suggestions made over the last few decades in the generative literature. The increasingly clear role of emergence in explaining grammatical complexity is welcome, but I will argue that "recursion", which plays a somewhat monolithic role in the authors' model, needs to be understood as more of a complex, multi-faceted set of processes.

2003-03-11

"The behavior of abstract and concrete words in large text corpora"

Rob Liebscher
David Groppe

+ more

Over the past five decades, psycholinguists have uncovered robust differences between the processing of concrete and abstract words. One of these is the finding that it is easier for people to generate possible contexts for concrete words than for abstract words; that is, concrete words seem to have higher "context availability" (CA) than abstract words. Some have argued that this difference is the root cause of other basic processing differences between concrete and abstract words. For example, concrete words are typically easier to identify, read, and remember than abstract words.

While the greater context availability of concrete words is well established, it is not clear why this is so. Schwanenflugel (1991) hypothesized that abstract words may be used in a greater variety of semantic contexts than concrete words, and are therefore less likely to be part of a "prototypical" context that is easy to generate. This hypothesis is difficult to test with psycholinguistic methods, but is readily testable with corpus analysis techniques.

Audet and Burgess (1999) tested CA by measuring the "context density" (the percentage of non-zero elements in a word's co-occurrence vector) of concrete and abstract words in Usenet corpora. They found that the set of abstract words had a higher context density than concrete words, and argued that this confirmed Schwanenflugel's speculation.

We re-evaluate the results of Audet and Burgess (1999) using similar corpora and a carefully chosen subset of their words, controlled for high concreteness and high abstractness ratings. In addition to the set of words used by Audet and Burgess, we use another set of words that were rated as being more prototypically concrete or abstract. We show that context density is seriously confounded with word frequency (r = 0.96) in both sets and that differences in context density disappear when frequency is controlled for.

We then demonstrate that a more appropriate measure of contextual constraint is the entropy of a word's co-occurrence vectors. Entropy reflects not only the presence but also the strength of association between a word and a context, and is not correlated with frequency. Entropy indicates that, in both sets, concrete words appear in more variable contexts than abstract words. This result runs counter to Schwanenflugel's hypothesis and suggests a rethinking of the psychological basis of context availability.
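
The contrast between the two measures can be illustrated with a short sketch; the toy co-occurrence vectors below are invented, but they show how two words with identical context density can differ sharply in the entropy of their context distributions.

import numpy as np

def context_density(vec):
    # Percentage of non-zero elements in the co-occurrence vector
    # (the measure used by Audet and Burgess).
    vec = np.asarray(vec, dtype=float)
    return 100.0 * np.count_nonzero(vec) / vec.size

def context_entropy(vec):
    # Shannon entropy (in bits) of the normalized co-occurrence vector,
    # reflecting the strength as well as the presence of associations.
    vec = np.asarray(vec, dtype=float)
    p = vec[vec > 0] / vec.sum()
    return float(-(p * np.log2(p)).sum())

concentrated = [50, 1, 1, 0, 0, 0]   # dominated by a single context
diffuse = [18, 17, 17, 0, 0, 0]      # spread evenly over its contexts

print(context_density(concentrated), context_entropy(concentrated))  # same density, low entropy
print(context_density(diffuse), context_entropy(diffuse))            # same density, higher entropy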

2003-03-04

"The relationship of eye gaze and agreement morphology in ASL:
An eye-tracking study"

Robin Thompson
Karen Emmorey
Robert Kluender

+ more

The licensing of agreement is a crucial feature of current syntactic theory, and as such it should be found in signed as well as spoken languages. Standard analyses of American Sign Language (ASL) propose three verb classes: agreeing verbs (e.g., BLAME), spatial verbs (e.g., GO), and plain verbs (e.g., LOVE) (Padden, 1983). Verbs are distinguished by the type of agreement morphology they occur with. Agreeing verbs are directed toward locations in signing space indicating subject/object arguments of the verb, spatial verbs toward locatives, and plain verbs do not occur with agreement morphology. However, Neidle et al. (NKMBL, 2000) claim that all verbs in ASL are agreeing, with only the manner in which agreement is marked differing across verb types. On this view, verbs can be marked with either manual agreement (verb directed toward locations associated with the subject/object), nonmanual agreement (eye gaze toward the object/head-tilt toward the subject), or through the use of an overt pronoun/nominal. While manual agreement is overtly morphological, nonmanual markings are claimed to be manifestations of abstract phi-features. If eye gaze is a possible marker of object agreement and if all verbs are underlyingly agreeing, then one would expect gaze toward object locations equally for agreeing, spatial and plain verbs (or with higher frequency for plain verbs since they lack manual marking). The NKMBL analysis also predicts eye gaze accompanying intransitive verbs towards the subject or the addressee (the default direction). Finally, plain verbs with null object pronouns must be marked with eye gaze, the only available feature-checker in this case. To test these predictions, we conducted a language production experiment using head-mounted eye-tracking to directly measure eye gaze.

Methods: Using the eye-tracker, 10 Deaf native signers (1) told another Deaf native signer a story designed to elicit spatial and agreeing verbs, and (2) made up sentences using specified verbs. Results: Consistent with NKMBL's claims, eye gaze accompanying agreeing verbs was most frequently directed toward the location of the syntactic object (70.1%) and otherwise toward a location on or near the addressee's face. However, eye gaze accompanying spatial verbs was toward a locative (63.2%) rather than the object of transitive verbs/subject of intransitive verbs as predicted. Eye gaze accompanying plain verbs was seldom directed toward the object (11.8%), inconsistent with NKMBL's claims; gaze for these verbs was generally toward the addressee's face (45.3%) or toward 'other', a location other than subject, object, or addressee (38.6%). Also, unlike agreeing verbs, plain verbs were never produced with null object pronouns. These results argue against NKMBL's claim that all verbs are agreeing, since eye gaze accompanying plain verbs does not mark the syntactic object. Additionally, while the results do support an analysis of eye gaze as marking agreement for agreeing and spatial verbs, agreement is not uniformly with the syntactic object as claimed by NKMBL. Thus, we propose an alternative analysis of eye gaze agreement as marking the 'lowest' available argument (Subject > Direct Object > Indirect Object > Locative) of an agreeing or spatial verb.

2003-02-24

"Signposts along the garden path"

Doug Roland
Jeff Elman
Vic Ferreira

+ more

Structural ambiguities such as the post-verbal Direct Object/Sentential Complement ambiguity have long been used to study sentence processing. However, a very important question remains unanswered: How much information does a comprehender have available as they process a structurally ambiguous sentence? We use a large corpus (the British National Corpus) to investigate the actual extent of ambiguity and how much information is available for disambiguation. The use of corpus data allows us to investigate the relative frequency and strength of disambiguating information found during natural sentence comprehension rather than that found in the isolated contexts used in psycholinguistic experiments.

We prepared a database of approximately 1.3 million examples of the 100 DO/SC verbs listed in Garnsey, Pearlmutter, Myers, & Lotocky (1997). Of these, approximately 248,000 examples were structurally ambiguous (specifically, DO or 'that'-less sentential complement). These examples were labeled for a variety of formal and semantic properties. The formal properties included the length of the subject and post-verbal NPs and their heads, and the log of the lexical frequency of the heads of the subject NP and the post-verbal NP. The semantic properties consisted of automatically ranking the subject and post-verbal NP heads on five semantic dimensions based on performing Principal Component Analysis on the WordNet hypernyms of the heads of these NPs.

We then performed a regression analysis on this data. This analysis produced a model that was able to predict the subcategorization correctly in nearly 90% of the structurally ambiguous cases. Because of the simplicity of factors included in the model, we feel that this represents a lower bound on the amount of information available. In fact, many of the cases where the model mis-predicted the subcategorization of the example could have been correctly resolved if pronoun case had been taken into account.

Not only does this model allow for the identification of factors which can be used to resolve ambiguity, it allows for the identification and detailed analysis of truly ambiguous or garden-path cases, through the analysis of cases where the model mis-predicts subcategorization.

2003-02-18

"Neural resources for processing language and environmental sounds:
Lesion-symptom mapping of patients with left hemisphere damage and fMRI with normal controls"

Ayse Pinar Saygin & Frederic Dick

+ more

Environmental sounds share quite a few perceptual and informational features with language, thus making them useful in exploring possible links between verbal and nonverbal auditory processing. However, the neural resources for processing environmental sounds, and especially the degree to which these overlap with the neural systems for processing language, are not completely understood. To examine the relationships between environmental sound and speech processing, we used two complementary methods: behavioral and lesion analyses in patients with brain damage and fMRI with normal controls. We used a 2-alternative forced-choice design where the task was to match environmental sounds and linguistic phrases to corresponding pictures. The verbal and nonverbal task components were carefully matched through a norming study.

In Experiment 1, 30 left hemisphere damaged, 5 right hemisphere damaged patients and 19 age-matched controls were tested behaviorally and patients' impairments in the verbal and nonverbal domains were examined. Lesion mapping was conducted using both traditional overlays as well as voxel-based lesion-symptom mapping (VLSM), an analysis method and software developed by our group.

In Experiment 2, 12 participants were scanned in a 1.5 T clinical scanner using a 'sparse sampling' paradigm that minimized the effect of the acoustical noise produced by the gradient coils. Group data were analyzed in order to look for regions active during processing of environmental sounds or speech. In order to provide additional statistical power, ROI analyses were conducted using regions drawn on individual subjects' cortical surfaces. Cross-correlations of each condition's positive and negative activations (relative to the complex baseline task) were performed in order to assess whether distributed coding of domain could be observed in these ROIs.

One of our more general goals is to integrate the two methods of brain mapping that we used in this project: lesion mapping and functional neuroimaging. Here we will present some analyses in which the lesion maps obtained via VLSM in Experiment 1 are used as masks for the fMRI data collected in Experiment 2. We will therefore not only examine the neural correlates of environmental sound and language processing with further precision, but also show how the two brain mapping methods can be used in conjunction to explore issues of interest in cognitive neuroscience.

2003-02-11

"Can speakers avoid linguistic ambiguities before they produce them?"

Victor S. Ferreira, L. Robert Slevc, and Erin S. Rogers

+ more

An expression is linguistically ambiguous when it can be interpreted in more than one semantically distinct way. Because such ambiguities potentially pose a fundamental challenge to linguistic communication, it is often assumed that speakers produce utterances that specifically avoid them. In four experiments, we had speakers describe displays that sometimes contained linguistic ambiguities by including two pictures with homophonic labels (e.g., a smoking pipe and a plumbing pipe). The experiments showed that linguistic ambiguities can be especially difficult to avoid, and that the difficulty comes from the fact that speakers are evidently unable to look ahead past the currently formulated linguistic expression to recognize that it can be interpreted more than one way, even when the alternative interpretation is itself just about to be linguistically encoded and articulated. On the other hand, speakers do avoid linguistic ambiguities when the alternative interpretation has already been described with the potentially ambiguous label. The results suggest that speakers can avoid linguistic ambiguities only by looking backwards at utterances they've already produced, not by looking forward at utterances they might be about to produce.

2003-02-04

"Computational Limits on Natural Language Suppletion"

Jeremy Boyd, Department of Linguistics, UC San Diego

+ more

While most natural languages tend to contain suppletive pairs, suppletion is vastly overshadowed in all languages by regular form-to-form mappings. What enforces the cross-linguistically low level of suppletion? This work makes the intuitive argument that suppletive mappings are kept to a minimum for a very simple reason: they are harder to learn. In what follows, this point is illustrated through an examination of suppletive and regular (uniform) verbal paradigms.

Most contemporary theories of morphology offer no way to constrain the amount of suppletion that occurs in a language. In inferential-realizational theories (Stump, 2001) for instance, a verbal paradigm can be realized using either rules that enforce uniformity, or rules that allow suppletion, as in the following examples from Spanish [see Table in PDF].

Repetition of the root habl- in each cell of the paradigm for hablar gives rise to its uniform nature. In contrast, no identifiable root exists to anchor the forms that make up ser's paradigm. As a result, the relationship between any two members of the paradigm is suppletive. The problem here is that theories that make use of these kinds of rules offer no reason to favor the class of rules that realizes a paradigm uniformly over the class that realizes a paradigm suppletively. This lack of constraint erroneously predicts that a language could contain more suppletive than uniform paradigms, or even be composed solely of suppletive paradigms.

The fact that the grammar does not provide a way to limit suppletion is not problematic, however, if we adopt the position that grammars are embedded within a biological system that has limited computational resources. In order to demonstrate the validity of such an approach, I devised a set of 11 'languages', each containing a different number of suppletive verbal paradigms, ranging from no suppletion, to a language in which all paradigms are suppletive. These languages were then presented to a neural network designed with a standard feedforward architecture, and running the backpropagation-of-error learning algorithm (Rumelhart & McClelland, 1986). The results show that, as the number of suppletive paradigms the network is asked to master increases, learnability decreases: [see Table in PDF].

Further, there is an upper limit, or threshold, on the number of suppletive paradigms that can be learned without significantly affecting network performance. In effect, the model predicts that suppletion in natural language will be tolerated, so long as it is kept to a minimum.

Although this work focuses on the way in which performance limitations can supplement inferential-realizational theories of morphology to provide constraints on suppletion, it can be applied to other morphological theories as well, most of which (if not all) also fail to put limits on whatever mechanism they use to account for suppletion.

2003-01-28

"Empirically-oriented comparative studies of rhythm and melody in language and music"

Aniruddh D. Patel, Associate Fellow, The Neurosciences Institute

+ more

In this talk I address the following questions: Does the rhythm of a composer's native language have an influence on the rhythm of his/her music? Does the processing of speech intonation have any neural relationship to the processing of musical melody? The larger issue addressed by these studies is the extent to which linguistic patterns and processes are shared with vs. sealed off from other aspects of cognition and perception.

2003-01-21

"Perturbation & adaption during language comprehension:
results from behavioral and fMRI studies"

Amy Ramage, San Diego State University

+ more

The current investigation examined perturbation and adaptation during language comprehension in young normal subjects. Induced instability was studied by increasing perceptual demand (compressed sentences), syntactic demand, or both. Two experiments were conducted, one behavioral and one using fMRI technology, to explore the relations between brain responses and behavior. This presentation addresses whether changes in rate of speech, syntax, or both induced an instability, or perturbation, and explores subsequent adaptation to increased instability. The results suggested that subjects develop and maintain a representation of either the syntactic frame (i.e., via a process like priming), a conscious strategy for accommodating syntactic complexity, or a rate normalization schema. The second experiment used fMRI to measure brain activation associated with perturbation and adaptation of language and identified regions active during increased demand and/or during adaptation. Those brain regions that remained active during adaptation may have been used to maintain the linguistic or perceptual frame.

2003-01-14

"Pre-attentive auditory processing of lexicality"

Thomas Jacobsen, Kognitive & Biologische Psychologie,
University of Leipzig

+ more

Which aspects of speech do we comprehend even while we are ignoring the input? Are isolated words processed pre-attentively? Lexicality and change detection based on auditory sensory memory representations were investigated by presenting repetitive auditory words and pseudo-words under ignore conditions in oddball blocks. In a cross-linguistic study, sound items that are words in Hungarian and pseudo-words in German and items with the reverse characteristics were used. A fully crossed 2x2 design of word and pseudo-word deviants and standards was implemented. Deviant words and pseudo-words elicited the Mismatch Negativity component of the event-related brain potential. The standards' lexicality hypothesis, which holds that lexical standards lead to different default processes than non-lexical pseudo-word standards regardless of the lexicality of the deviant, was confirmed. In both language groups the Mismatch Negativity was larger with word standards than pseudo-word standards, irrespective of the deviant type. It is suggested that an additional process is triggered during deviancy detection by a pre-attentive tuning in to word standards. Furthermore, in both groups the ERPs elicited by word standards were different from ERPs elicited by pseudo-word standards starting around 220 ms after the uniqueness point. This also demonstrates that the lexicality of the context affects the processing of the auditory input.

2002-12-03

Talk in the here and now

Herbert H. Clark

+ more

As people talk, they anchor what they say to the here and now - to their current common ground. Indeed, anchoring is an essential part of their communication. They do this by communicative acts of indicating. They indicate themselves as speaker and addressee; they indicate the objects and events they refer to; and they indicate certain times and places. The issue is how they manage that. In this talk I take up how people indicate things in joint activities such as building models, furnishing a house, and gossiping. The evidence I use comes from video- and audiotapes. In the end I will argue that much of what is considered background or context - and therefore non-communicative - is really made up of communicative acts of indicating.

2002-11-26

"An accounting of accounts: Pragmatic deficits in explanations by right
hemisphere-damaged patients"

Andrew Stringfellow

+ more

Right hemisphere brain damage (RHD) has typically been characterized as producing deficits in visuospatial abilities, attention deficits, and/or deficits in the processing of emotion. Over recent years, more attention has been paid to the abnormal verbal abilities that may present following RHD. These abnormalities are typically characterized as involving "non-literal" language; while some of the problems no doubt arise from the lower-level deficits above, others are putatively associated with a deficiency in theory of mind specifically, and social cognition more generally. The results of two studies are presented; these studies attempt to characterize the discourse styles of RHD patients in the production of requests for assistance and explanations for/accounts of transgressive behavior. An attempt will be made to situate these results within existing accounts of RH (dys-)function.

2002-11-18

"On the processing of Japanese Wh-Questions: An ERP Study"

Mieko Ueno

+ more

Using event-related brain potentials (ERPs), I investigated the processing of Japanese wh-questions, i.e., questions including wh-words such as 'what' and 'who'. Previous ERP studies on the processing of wh-questions in English and German have reported effects of left anterior negativity (LAN) between a displaced wh-word (filler) and its canonical position (gap). These have been argued to indicate verbal working memory load (Kluender & Kutas, 1993; Fiebach, et al. 2001). Unlike English or German wh-words, Japanese wh-words typically are not displaced, but remain in canonical Subject-Object-Verb word order (so-called wh-in-situ). Additionally, Japanese wh-words are associated with a question particle that by its clausal placement indicates what part of the sentence is being questioned (Nishigauchi, 1990; Chen, 1991), e.g., 'Did you say what he brought?' (embedded clause scope) and 'What did you say he brought?' (main clause scope). Both a self-paced reading-time study (Miyamoto & Takahashi, 2001) and an ERP study (Nakagome et al., 2001) suggest that the parser expects a question particle following a Japanese wh-element. Given the above, I tested the extent to which the neural processing of Japanese wh-questions shows similarities to the processing of English or German wh-questions.
In experiment 1, stimuli were mono-clausal wh- and yes-no-questions with the object NP (wh or demonstratives) in situ (1a) and displaced (1b). In experiment 2, stimuli were bi-clausal wh-questions (with embedded and main clause scope wh) and their structurally equivalent yes/no-question counterparts. For each experiment, a group of 20 native speakers of Japanese was used, and sentences were presented visually one word at a time.
Bi-clausal main clause scope wh-questions (2b) elicited greater anterior negativity between wh-words and corresponding question particles. This was similar to ERP effects seen between fillers and gaps in English and German, and suggests similar mechanisms for processing wh-related dependencies across syntactically distinct languages. In addition, both mono-clausal ((1a) and (1b)) and bi-clausal ((2a) and (2b)) wh-questions elicited greater right-lateralized (mostly anterior) negativity at sentence end. This effect can most conservatively be interpreted as an end-of-sentence wrap-up effect. However, since such effects have consistently been reported as right posterior negativities, the possibility exists that the effect indexes a processing effect specific to a wh-in-situ language like Japanese. One possible account is the effect of the integration of sentential wh-scope.
(1) Mono-clausal stimuli

a. Ano jimotono shinbun-ni yoruto
the local newspaper-to according
sono yukanna bokenka-ga toto [nani-o/sore-o] mitsuketandesu-ka.
the brave adventurer-N finally [what-A/that-A] discovered-Q

'According to the local newspaper, did the brave adventurer finally discover what/that?'

b. Ano jimotono shinbun-ni yoruto
the local newspaper-to according
[nani-o/sore-o] sono yukanna bokenka-ga toto mitsuketandesu-ka.
[what-A/that-A] the brave adventurer-N finally discovered-Q

'According to the local newspaper, did the brave adventurer finally discover what/that?'

(2) Bi-clausal stimuli

[senmu-ga donna/atarashii pasokon-o katta-KA/TO]
director-N what.kind.of/new PC-A bought-Q/that

keirika-no kakaricho-ga __ kiki/ii-mashi-ta-ka.
accounting.sec-of manager-N ask/say-POL-PAST-Q

a. 'Did the manager of the accounting section ask what kind of computer the director bought?'

b. 'What kind of computer did the manager of the accounting section say the director bought?'

c. 'Did the manager of the accounting section ask whether the director bought a new computer?'

d. 'Did the manager of the accounting section say that the director bought a new computer?'

2002-11-05

"Verbal working memory and language development"

Katherine Roe

+ more

I will be presenting some (or all) of my dissertation studies, which were designed to assess the relationship between verbal working memory and language development. One series of studies investigated whether children's proficiency at complex sentence comprehension was related to their verbal working memory development. The other series of experiments aimed to determine whether sensitivity to contextual cues embedded within a sentence is working memory dependent in adults and/or children.

2002-10-29

"'That' as syntactic pause: Retrieval difficulty
effects on syntactic production"

Vic Ferreira

+ more

In certain common sentences, a speaker can use or omit optional words, such as the "that" in a sentence complement structure like "The poet recognized (that) the writer was boring." What is the communicative value of the mention or omission of such optional words? Two independent research threads converge to suggest an intriguing possibility: First, research on disfluent production suggests that speakers use filled pauses like "uh" and "um" specifically to communicate upcoming retrieval difficulties (Clark & Fox Tree, 2002), implying that "upcoming difficulty" is communicatively useful information. Second, research on sentence-complement production shows that speakers are more likely to omit the "that" when post-"that" material is easily retrieved from memory (Ferreira & Dell, 2000; note that similar effects have been revealed with other alternations, e.g., Bock, 1986). What has not been shown is that speakers mention "thats" more specifically when subsequent material is more difficult to retrieve; if so, then the communicative value of the "that," like a filled pause, might be to indicate upcoming retrieval difficulty.

To test this, we exploited the fact that speakers have difficulty retrieving words that are similar in meaning to other words that they have just expressed (e.g., Vigliocco et al., in press). Speakers produced sentence-complement structures in which the post-"that" material -- the embedded-subjects -- were either meaning-similar (and therefore more difficult to retrieve) or meaning-dissimilar (and therefore easier to retrieve) to three nouns in the main subjects (e.g., "The AUTHOR, the POET, and the BIOGRAPHER recognized (that) the WRITER was boring" vs. "The AUTHOR, the POET, and the BIOGRAPHER recognized (that) the GOLFER was boring."). (A separate experiment independently verified this effect of similarity on retrieval difficulty). Production was elicited with a sentence-recall procedure, where speakers read and produced sentences back after a short delay (which results in relatively free production of the "that"). The results confirmed the prediction: Speakers produced significantly more "thats" before more difficult-to-retrieve meaning-similar embedded subjects than before more easily retrieved meaning-dissimilar embedded subjects. Furthermore, meaning-similar embedded subjects were _also_ accompanied by more disfluencies, and "that"-mention and disfluency rate were significantly correlated. Thus, speakers mention "thats" more often when subsequent sentence material is more difficult to retrieve, suggesting that speakers may use "thats" (and possibly other choices of sentence form as well) to indicate such upcoming retrieval difficulties.

2002-10-22

"Verb Sense and Verb Subcategorization Probabilities"

Doug Roland

+ more

Verbs can occur in a variety of syntactic structures. For example, the verb 'fight' can be used with only the subject (he fought), with a prepositional phrase (He fought for his own liberty), or with an NP direct object (He fought the panic of vertigo). The set of probabilities describing how likely a verb is to appear in each of its possible syntactic structures is sometimes referred to as the subcategorization probabilities for that verb. Verb subcategorization probabilities play an important role in both psycholinguistic models of human sentence processing and in NLP applications such as statistical parsing. However, these probabilities vary, sometimes greatly, between sources such as various corpora and psycholinguistic norming studies. These differences pose a variety of problems. For psycholinguistics, these problems include the practical problem of which frequencies to use for norming psychological experiments, as well as the more theoretical issue of which frequencies are represented in the mental lexicon and how those frequencies are learned. In computational linguistics, these problems include the decreases in the accuracy of probabilistic applications such as parsers when they are used on corpora other than the one on which they were trained. I will propose two main causes of the subcategorization probability differences. On one hand, differences in discourse type (written text, spoken language, norming experiment protocols, etc.) constrain how verbs are used in these different circumstances, which in turn affects the observed subcategorization probabilities. On the other hand, the types of semantic contexts that occur in the different corpora affect which senses of the verbs are used. Because these different senses of the verbs have different possible subcategorizations, the observed subcategorization probabilities also differ.

This suggests that verb subcategorization probabilities should be based on individual senses of verbs rather than the whole verb lexeme, and that "test tube" sentences are not the same as "wild" sentences. Hence, the influences of experimental design on verb subcategorization probabilities should be given careful consideration.

2002-10-01

'Voxel-based Lesion-symptom Mapping'

Elizabeth Bates

+ more

Lesion studies are the oldest method in cognitive neuroscience, with references to the effects of brain injury on speech going back as far as the Edwin Smith Surgical Papyrus more than 3000 years ago. Functional brain imaging is the newest method in cognitive neuroscience; the first papers applying positron emission tomography (PET) to language activation appeared in the 1980s, and the first functional magnetic resonance imaging (fMRI) studies of language appeared in the last decade. Although there are good reasons to expect convergence between lesion and imaging techniques, their underlying logic differs in important ways. Hence any differences in brain-behavior mapping that we can detect through comparison of these two methods may be just as valuable as the anticipated similarities, if not more valuable. To conduct such comparisons, we need a format in which similarities and differences between lesion studies of patients and imaging studies of normal individuals can be compared in detail, going beyond qualitative comparisons (e.g. Brodmann's Area 44 is implicated in both lesion studies and imaging studies of speech production), toward a finer-grained quantitative assessment of the degree to which a given region contributes to normal and abnormal performance on a given task.

In this talk, I will survey results that our group (including Stephen Wilson, Ayse Saygin, Fred Dick, Marty Sereno, Bob Knight and Nina Dronkers) has obtained this summer at CRL, with a new method that we have baptized Voxel-based Lesion-Symptom Mapping (VLSM). VLSM takes the same graphic and analytic formats used to quantify activations in fMRI, and applies them to the relationship between lesion sites (at the voxel level) and continuously varying behavioral scores. In our first illustrations of this method, we compare the relationship between behavioral performance and lesion sites for several subscales of a standard aphasia battery (the Western Aphasia Battery), with a particular emphasis on fluency vs. comprehension (the primary measures to distinguish between fluent and non-fluent aphasias). VLSM maps are constructed using behavioral and structural imaging data for 97 left-hemisphere damaged patients with aphasia, whose lesions have been reconstructed in a standard stereotactic space. You will see at a glance how behavioral deficits "light up" in stereotactic space, expressed as continuously varying z-scores within each voxel for patients with and patients without lesions in that voxel, and as continuously varying statistics within each voxel that represent differences in performance between patients with and patients without lesions in that particular piece of neural tissue. The striking differences displayed for speech fluency vs. auditory comprehension are consistent with 140 years of research in aphasia. However, this is the first time that these well-known lesion-symptom relationships have been mapped using continuous behavioral scores, permitting direct visual inspection of the degree to which a region contributes to behavioral deficits.

We will also show how VLSM maps can be compared across tasks, quantifying degree of similarity (using correlational statistics) and identifying the regions responsible for various degrees of association and dissociation between (for example) fluency and comprehension. This approach to inter-map correlation is useful not only for the exploration of similarities and differences in lesion-symptom mapping across behavioral domains, but also for direct comparisons of VLSM maps of behavior with fMRI or PET maps of activation in the same (or different) behavioral measures in normal subjects. Results of VLSM studies can also be used to identify "regions of interest" for fMRI studies of normal individuals. Conversely, results of fMRI studies can be used to establish regions of interest (with lower significance thresholds) for lesion-symptom mapping using VLSM. The examples we have given here are all based on language measures. Indeed, our preliminary efforts indicate that each of the subscales of the Western Aphasia Battery (e.g. repetition, naming, reading, writing, praxis) yields its own distinct VLSM map, with various degrees of inter-map correlation. However, the method is certainly not restricted to behavioral studies of language in aphasic patients; it could be used for any behavioral domain of interest in cognitive neuropsychology. Furthermore, although VLSM requires information from groups of patients, preliminary results from our laboratories indicate that it can yield reliable results with smaller groups of patients than we have employed here -- as few as 15-20 patients, depending on the robustness of the behavioral measure and its neural correlates. It should also be possible to evaluate the lesion-symptom relationships uncovered for a single patient, by comparing the lesion location and the observed behavioral results for that patient on a given task or set of tasks with the lesion profile that we would predict based on VLSM maps of behavioral results for larger groups. Goodness-of-fit statistics can then be used to evaluate the extent to which an individual case conforms to or deviates from group profiles. Finally, the use of VLSM is not restricted to patients with focal brain injury (children or adults). In a pioneering series of studies by Metter, Cummings and colleagues, continuous resting-state metabolic scores were obtained for groups of aphasic patients using positron emission tomography. Continuous metabolic scores in several specific regions of interest were correlated with continuous behavioral scores for the same patients, uncovering regions of hypo-metabolism that were associated with behavioral deficits. The same approach can be taken on a voxel-by-voxel basis with VLSM, correlating continuous behavioral metrics with continuous rather than discrete lesion information. In principle, the latter may include resting-state and/or task-related metabolic scores in PET, perfusion scores on fMRI, perhaps even diffusion-tensor imaging information on regions of white matter, and/or zones of atrophy in patients with dementia. The limits of this method are currently unknown, although all applications will (in contrast with whole-head imaging studies of normals) be limited by the nature, origins and extent of the disease process that results in damaged tissue.
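
A highly simplified sketch of the per-voxel logic described above, assuming a binary lesion matrix and one continuous behavioral score per patient; the data here are random placeholders, and the statistic (an independent-samples t-test at each voxel) is only one of the analytic formats VLSM can borrow from fMRI.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_patients, n_voxels = 97, 5000                       # placeholder dimensions
lesions = rng.random((n_patients, n_voxels)) < 0.15   # True = voxel lesioned
scores = rng.normal(70, 15, n_patients)               # e.g. a fluency subscale

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    lesioned = scores[lesions[:, v]]
    spared = scores[~lesions[:, v]]
    if len(lesioned) >= 5 and len(spared) >= 5:       # skip rarely lesioned voxels
        t_map[v] = ttest_ind(spared, lesioned).statistic

# t_map can now be thresholded and rendered in stereotactic space like an fMRI
# statistical map; maps from two behavioral measures can be correlated voxel-wise.
print("max voxel t:", np.nanmax(t_map))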

2002-09-09

"BIMOLA: A localist connectionist model of bilingual spoken word recognition"

Nicolas Lewy

+ more

Over the last few years, various psycholinguistic studies of bilingualism have been concerned with representational issues, such as the internal organization of the bilingual's lexicon, while fewer have examined the processes which underlie bilingual language perception. In addition, written language has been explored more than speech, despite the fact that bilinguals spend more time speaking than they do writing and that, when speaking, they have to process both monolingual utterances in their two (or more) languages and mixed utterances that contain code-switches and borrowings. Based on experimental research investigating how bilinguals recognize these "guest words", we have developed BIMOLA (Bilingual Model of Lexical Access), a localist connectionist model of bilingual spoken word recognition. Inspired by McClelland and Elman's TRACE, which focuses on monolingual spoken word recognition, BIMOLA consists of three levels of nodes (features, phonemes and words), and it is characterized by various excitatory and inhibitory links within and between levels. Among its particularities, we find shared phonetic features for the two languages (in this case, English and French), parallel and independent language processing at the higher levels, and the absence of cross-language inhibition. We also note that language decisions emerge from the word recognition process as a by-product (e.g. having processed a word, BIMOLA can tell whether it was an English or a French word). The model we propose can account for a number of well established monolingual effects as well as specific bilingual findings. This talk, prepared in cooperation with Francois Grosjean, will also include a computer demonstration. Using a specially designed user interface, and time permitting, we will run various simulations on-line, display their results graphically and show some of BIMOLA's components (lexicons, language mode, parameters, etc.).

2002-06-04

"How Chipmunks, Cherries, Chisels, Cheese, and Clarinets are Structured,
Computed, and Impaired in the Mind and Brain"

Ken McRae & George S. Cree

+ more

A number of theories have been proposed to explain how concrete nouns are structured and computed in the mind and brain, and selectively impaired in cases of category-specific semantic deficits. The efficacy of these theories depends on obtaining valid quantitative estimates of the relevant factors. I describe analyses of semantic feature production norms for 206 living and 343 nonliving things covering 36 categories, focusing on seven behavioral trends concerning the categories that tend to be relatively impaired/spared together. The central hypothesis is that given the multiple sources of variation in patient testing, multiple probabilistic factors must converge for these trends to obtain. I show that they can be explained by: knowledge type (using a 9-way cortically-inspired feature taxonomy), distinguishing features, feature distinctiveness, cue validity, semantic similarity, visual complexity, concept familiarity, and word frequency.
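
As an illustration of two of the quantitative factors listed above, the short sketch below computes feature distinctiveness and cue validity from a tiny, invented concept-by-feature table; the concepts, features, and domain labels are placeholders, not items from the actual production norms.

# Invented feature listings standing in for semantic feature production norms.
features = {
    "dog":      {"has_fur", "barks", "has_4_legs"},
    "cat":      {"has_fur", "meows", "has_4_legs"},
    "chisel":   {"made_of_metal", "used_for_carving", "is_manipulated_by_hand"},
    "clarinet": {"made_of_wood", "is_played", "is_manipulated_by_hand"},
}
domain = {"dog": "living", "cat": "living", "chisel": "nonliving", "clarinet": "nonliving"}

all_features = sorted(set().union(*features.values()))
for f in all_features:
    bearers = [c for c, fs in features.items() if f in fs]
    distinctiveness = 1 / len(bearers)          # 1.0 = a distinguishing feature
    living = sum(domain[c] == "living" for c in bearers)
    cue_validity = living / len(bearers)        # P(living domain | feature)
    print(f"{f:24s} distinctiveness={distinctiveness:.2f}  P(living|feature)={cue_validity:.2f}")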

2002-05-28

Maximizing Processing in an SOV Language: A Corpus Study of Japanese and English

Mieko Ueno & Maria Polinsky

+ more

A number of parser models (e.g., Pritchett 1992; Babyonyshev and Gibson 1999) are based on the idea that syntactic attachment happens at the verbal head, which gives the parser information about semantic roles and grammatical relations of argument noun phrases. Such models predict that S(ubject)-O(bject)-V(erb) languages are harder to process than SVO languages, since the parser would have to hold both S and O until it hits V, as opposed to only holding S in SVO. However, since there is no attested difference in reaction times of SOV and SVO languages for on-line processing, we hypothesize that SOV languages have strategies to compensate for the late appearance of the verb. In particular, they may differ from SVO languages in having fewer sentences with two-place predicates where both verbal arguments are expressed.

To test this hypothesis, we conducted a comparative corpus study of English (SVO) and Japanese (SOV). For both languages, root clauses (N=800) were examined with respect to the frequency of one-place (SV: intransitives) vs. two-place (SOV for Japanese, SVO for English: transitives) predicate structures and the overt expression of all arguments. Four different genres were examined in both languages: home decoration magazines, mystery novels, books about Japanese politics, and children's utterances (from CHILDES). Japanese exhibits a significantly greater use of one-place predicates than English (for example, 62.9% compared to the English 36.5% in mystery novels; p < .001 in all genres except books about Japanese politics). In addition, with two-place predicates, Japanese uses null pronouns (pro-drop), thus reducing the number of overt argument noun phrases. The use of pro-drop with one-place predicates in Japanese is significantly lower than with two-place predicates (p < .05, in all genres except mystery novels). The differences are particularly apparent in child language, where Japanese-speaking children around 3;8 had 21% transitives with 100% pro-drop and English-speaking children of the same age had 71% transitives with only 33% pro-drop. A preliminary comparison with a pro-drop SVO language (Spanish, based on Bentivoglio 1992) indicates that the distribution of pro-drop across intransitive and transitive clauses is much more even.

These results suggest that there is an extra cost associated with the processing of transitive clauses in a verb-final language. To minimize that cost, Japanese uses a significantly lower percentage of full SOV structures. Thus, processing strategies in SVO and SOV languages differ in a principled manner.
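
The frequency comparisons reported above amount to contingency-table tests over clause counts. A minimal sketch (cell counts invented to approximate the reported mystery-novel proportions, not the actual corpus counts):

# Illustrative chi-square test on a 2x2 table of clause counts; the numbers are
# invented to approximate the reported proportions, not the real corpus cells.
from scipy.stats import chi2_contingency

#                       one-place  two-place
table = [[126, 74],   # Japanese mystery novels: ~63% one-place predicates
         [ 73, 127]]  # English mystery novels:  ~36.5% one-place predicates

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4g}")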

2002-05-21

Understanding the functional neural development of language production and comprehension: a first step using fMRI.

Cristina Saccuman (remotely from Milan) and Fred Dick

+ more

This study - a joint effort in the Center for Cognitive and Neural Development - is truly a developmental 'fact-finding' mission, in that we know relatively little about the neural substrates of language processing in normally-developing children. Here, we examine the BOLD response in a group of older children (10-12 yrs) and young adults (18-30) who performed our workhorse picture naming and sentence interpretation tasks in the magnet. I'll present the results of our initial analyses, and will also discuss some of the difficulties inherent in conducting and interpreting developmental fMRI experiments.

2002-05-07

Neural systems supporting British Sign Language processing

Mairead MacSweeney

+ more

Exploring the neural systems that support processing of a signed language can address a number of important questions in neuroscience. In this talk fMRI studies of British Sign Language (BSL) processing will be presented and the following issues addressed: What are the similarities and differences in the neural systems that underlie BSL and audio-visual English processing in native users of the language (deaf native signers vs. native hearing English speakers)? What is the impact of congenital deafness on the functioning of auditory cortex - is there evidence for cross-modal plasticity? Does the extent to which sign space is used to represent detailed spatial relationships alter the neural systems involved in signed language processing?

2002-04-30

Teaching Children with Autism to Imitate Using a Naturalistic Treatment Approach: Effects on Imitation, Social, and Language Behaviors

Brooke Ingersoll & Laura Schreibman UCSD

+ more

Children with autism exhibit deficits in imitation skills both in structured settings and in more natural contexts such as play with others. These deficits are a barrier to the acquisition of new behaviors as well as to socialization and communication, and are thus an important focus of intervention. Research indicates that naturalistic behavioral treatments are very effective at teaching a variety of behaviors to children with autism and mental retardation. Variations of these techniques have been used to teach language, play, social, and joint attention skills; however, as of yet, they have not been used to teach imitation skills. We used a single-subject, multiple baseline design across three young children with autism to assess the benefit of a newly designed naturalistic imitation training technique. Participants were observed for changes in imitative behavior as well as other closely related social-communicative behaviors (language and joint attention). Results suggest that this intervention can successfully increase imitative behaviors in young children with autism and also has a facilitative effect on language and joint attention.

2002-04-23

"Hierarchical organisation in spoken
language comprehension: evidence from functional imaging"

Matt Davis

+ more

Models of speech comprehension postulate multiple stages of processing although the neural bases of these stages are uncertain. We used fMRI to explore the brain regions engaged when participants listened to distorted spoken sentences. We applied varying degrees of three forms of distortion, and correlated BOLD signal with the intelligibility of sentences to highlight the systems involved in comprehension. By contrasting different forms of distortion we can distinguish between early (acoustic/phonetic) and late (lexical/semantic) stages of the comprehension process. The increased demands of comprehending distorted speech (compared to clear speech) appear to modulate processes at both of these levels.

2002-04-16

"Complex Morphological Systems and Language Acquisition"

Heike Behrens

+ more

The German plural has figured prominently in the Dual Mechanism Model of inflection: Out of the eight plural markers, the -s plural is special because it is low in frequency and at the same time largely unconstrained in terms of the morphonological properties of the noun root it combines with. In the Dual Mechanism Model it was hypothesized that the -s plural serves as the default affix: Supposedly, irregular forms are stored holistically, and errors occur when lookup in memory fails. I will address the predictions of this model for language acquisition with a particularly detailed case study (12,000 plural forms): Error patterns show that some highly predictable plurals are acquired without errors, whereas other sets of nouns with low predictability of the plural marker show error rates of up to 40%. Hence, plural errors are not due to random or frequency-based "retrieval failure", but indicate ambiguities in the plural system. Second, the distributional properties of the -s plural are acquired in a piecemeal fashion by generalization over the subsets of nouns it applies to: -s errors occur only in morphonological domains where the -s plural is attested. In sum, neither plural errors nor the acquisition of the -s plural suggest that a second, default mechanism is at work.

2002-04-09

Phonological awareness in children in and out of school

Katie Alcock

+ more

Objectives
Phonological awareness is a composite skill including awareness of words, phonemes, and phonological similarities, and the ability to break down words into component parts. Skill in phonological awareness tasks predicts future or concurrent reading skill; however, some phonological awareness tasks are not possible for preschool children or illiterate adults. The aim of this study is to investigate the direction of causality by studying children who cannot read through lack of opportunity rather than lack of aptitude.

Design
The study aimed to investigate the impact of age and schooling on phonological awareness in an age group that in Western settings would already be at school. A two by four (attending and never attended school groups, with four age groups in each schooling group) design was employed.

Methods
Matched groups of Tanzanian children aged 7 to 10 years with no schooling or in first or second grade performed reading tests and phonological awareness tests.

Results
Most phonological awareness tests were predicted better either by reading skill or by exposure to instruction than by age. Letter reading skill was more predictive of phonological awareness than word reading skill.

Conclusions
While some tests could be performed by nonreaders, some tests were only performed above chance by children who were already able to read and hence we conclude that these tests depend on reading skill, and more particularly letter reading skill. We discuss the implications of these findings for theories of normal reading development and dyslexia.

2002-03-19

Components and Consequences of Attentional Control Breakdowns in Healthy Aging and Early Stage Alzheimer's Disease

Dave Balota

+ more

A series of studies (e.g., semantic priming, semantic satiation, Stroop, false memory) will be reviewed that address the nature of changes in attentional systems in healthy older adults and in AD individuals. Attempts will be made to show how attentional selection and maintenance of attentional set across time underlie some of the memory breakdowns produced in these individuals.

2002-03-05

A point of agreement between generations: an electrophysiological study of grammatical number and aging

Laura Kemmer

+ more

A topic of current debate in the aging literature is whether the slowing of mental processes suggested by some measures (e.g., reaction times) is a generalized phenomenon affecting all aspects of mental processing, or whether some aspects of processing are spared. In the domain of language, it has been suggested that processing, at least of some syntactic phenomena, is slowed. However, most of these studies have examined complex syntactic phenomena (e.g., relative clauses or passive formation) and have used end-product dependent measures such as response times rather than online measures of processing. Moreover, at least some of the syntactic phenomena examined are known to have a substantial working memory component, thus making it difficult to determine whether the observed slowing is due to limitations in working memory or syntactic processing per se or both. In an attempt to tease apart the individual contribution of syntactic processing, we used electrophysiological measures to examine grammatical number agreement. This is an area of syntax which does not seem to have a strong working memory component. We used an on-line dependent measure, event-related potentials, so that we could better examine the time course of processing as it unfolded. We recorded ERPs from older and younger subjects as they read sentences which did or did not contain a violation of grammatical number agreement (subject/verb or reflexive pronoun/antecedent). For young and old alike, these violation types elicited a late positivity (P600/SPS), the timing of which did not differ reliably as a function of age. The distribution of these ERP effects, however, did differ with age. Specifically, in younger adults, the syntactic violations compared to their control items elicited a positivity that was large posteriorly and small anteriorly, and slightly larger over right than left hemisphere sites. In contrast, in older adults, the effect was somewhat more evenly distributed in both the anterior-posterior and left-right dimensions: the elderly showed relatively more positivity over anterior sites than the young, with a more symmetrical left-right distribution. Thus, while we obtained no evidence that the appreciation of two types of number agreement in written sentences (presented one word at a time) slows significantly with normal aging, the observed difference in scalp distribution suggests that non-identical brain areas, and thus perhaps different mental processes, may be involved in their processing with advancing age.

2002-02-26

Lexical Decision and Naming Latencies for Virtually All Single Syllable English Words: Preliminary Report from a Wordnerd's Paradise

Dave Balota

+ more

Results will be reported from a study in which 60 participants provided naming or lexical decision responses to over 2800 single syllable words. These are the same items that have been the focus of connectionist models of word naming. In the first part of the talk, discussion will focus on the predictive power of available models at the item level, compared to standard predictors such as log frequency and word length. In the second part of the talk, analyses across the naming and lexical decision tasks will be provided that compare the predictive power at distinct levels (e.g., phonological onsets, word structure variables such as length, feedforward consistency, feedback consistency, orthographic neighborhood size, and word frequency, and meaning level variables such as imageability, and Nelson's set size metric). Discussion will focus on task specific differences and the role of attention in modulating the contribution of different sources of information to accomplish the goals of a specific task. Emphasis will also be placed on the utility of large scale databases in clarifying some controversies that have arisen in the smaller scale factorial designs that are still the standard in the visual word recognition literature.

2002-02-12

From Theory to Practice: Addressing the Pediatrician's Dilemma

Shannon Rodrigue

+ more

Specific Language Impairment (SLI) is a disorder that can be identified on the basis of delayed onset and protracted development of language relative to other areas of development and is generally identifiable during the preschool years. A child may be identified as being at risk for SLI before age three if she is a "Late Talker," or a child with a very small productive vocabulary at around two years of age. (Virtually all children with SLI were first Late Talkers.) The "pediatrician's dilemma" refers to the logistical difficulties associated with making a determination as to which infants or toddlers might eventually be Late Talkers and thereby also at risk for SLI. Thal (2000) has made progress toward addressing this dilemma by finding that rate of growth in comprehension vocabulary (by parent report on the MacArthur CDI) at the earliest ages of linguistic development is a strong predictor of later productive vocabulary at 28 months (at the group level). The present study evaluates whether an abbreviated version of the same parent report instrument (Short Form CDI) will yield equally positive findings. I also extend upon Thal (2000) and consider prediction at the level of individual children. Findings, the implications of these findings, and future directions are discussed in terms of theoretical and applied significance.

2002-02-05

Pushing the Limits of Word Comprehension in Normal and Aphasic Listeners

Suzanne Moineau

+ more

Most aphasiologists have agreed that, although the linguistic profiles seen in aphasics are quite complex, there has been enough evidence of similarities and differences among patients to warrant classification of these individuals into discrete groups. For more than a century now, we have known that lesions in the vicinity of Broca's area produce a non-fluent type aphasia that is characterized by telegraphic speech, with relatively preserved auditory comprehension; whereas, lesions involving Wernicke's area produce fluent type aphasias, characterized by paraphasic errors and a significant impairment in auditory comprehension. Though more recent research has uncovered deficits in the auditory comprehension of Broca's aphasics on complex and non-canonical sentence types, there is little in the literature to suggest that Broca's aphasics have deficits with comprehension of single words, unlike Wernicke's aphasics. The differences noted in fluency and comprehension patterns have formed much of the basis for differential diagnosis of aphasia symptoms into these discrete classifiable categories. It is my contention that the deficits seen in aphasic individuals are better defined as being continuous, and as such a seemingly preserved function (like word comprehension in Broca's aphasics) may be vulnerable to breakdowns under sub-optimal processing conditions (such as noisy environments, diminished hearing associated with general aging, fatigue). The current study aimed to investigate the effects of perceptual degradation on receptive lexical processing in college-aged individuals, normally aging older adults, and individuals with brain injury (both left and right hemisphere lesions), in an attempt to uncover break points in lexical comprehension in varying populations. I won't spoil the surprise....

2002-01-22

Coherence and Coreference Revisited

Andrew Kehler

+ more

The principles underlying the interpretation of pronominal reference have been extensively studied in both computational linguistics and psycholinguistics, but little consensus has emerged. In this talk, we revisit Hobbs's (1979) hypothesis that coreference is simply a by-product of establishing discourse coherence, in light of counterevidence that has motivated attentional state theories such as Centering (Grosz et al., 1995 [1986]; Brennan et al., 1987). While proponents of Centering have correctly argued that Hobbs's account cannot model a hearer's "immediate tendency" to interpret a pronoun, we show that Centering also suffers from this drawback (Kehler, 1997). We then show how a seemingly self-contradictory collection of data patterns with a neo-Humean trichotomy of coherence relations that has been used in analyses of VP ellipsis, gapping, extraction, and tense interpretation (Kehler, 2002). This data can be accounted for by modeling attention within the dynamic inference processes underlying the establishment of coherence relations, as opposed to modeling discourse state on a clause-by-clause basis using superficial cues in the manner posited by attentional state theories.

2002-01-15

Embodiment and language discussion session

Ayse P. Saygin

+ more

A message from Ayse Saygin:

Hello everyone,

For the quarter's first CRL colloquium we will have a discussion session on the topic of embodiment and language, covering both linguistic and experimental aspects. The discussion will be moderated by Elizabeth Bates and Tim Rohrer. As usual, we are meeting in CSB 280 at 4:00 pm.

Hope to see you all there !

Ayse P. Saygin

2001-12-03

Verb Aspect and the Activation of Event Knowledge in Semantic Memory

Todd Ferretti

+ more

Previous psycholinguistic research has shown that verb aspect modulates the activation of event information explicitly given in a text. For example, events presented as ongoing (was verbing - past imperfective aspect) are foregrounded in a reader's mental model of the discourse, and these events (including the participants and objects associated with the events) tend to remain active for long durations if there are no further time shifts in the discourse. Alternatively, events presented as completed (had verbed - past perfect or verbed - perfective) tend to be backgrounded in the reader's mental model, decreasing the activation of the event over subsequent discourse.

How verb aspect modulates the activation of world knowledge about common events has received little attention, a fact that is surprising given the important role that background knowledge of events plays in language comprehension. The main goals of the following research were to examine 1) how verb aspect influences the activation of information about events from semantic memory, 2) how people use aspect and world knowledge to make causal bridging inferences, and 3) how semantic memory and aspect interact in phrases, sentences, and larger discourses.

A number of different experimental methodologies were employed (including semantic priming, inferencing tasks, sentence completions, and ERP) to examine these issues. Results indicate that (1) knowledge of common event locations is more activated following verbs marked as ongoing (was skating - arena) than completed (had skated - arena), (2) that people complete sentence fragments such as "The diver was snorkeling...." with locative prepositional phrases more often with past imperfective than past perfect aspect, (3) that people seem to have more difficulty integrating locative phrases following verbs marked with past perfect aspect during on-line sentence comprehension, and (4) that they utilize world knowledge about the outcomes of events differently depending on the aspectual form of the verbs denoting causal actions.

These results have implications for models of how grammatical information and background knowledge interact to constrain expectations and/or inferences about events mentioned in a discourse.

2001-11-27

"Free word order" and Focus Ambiguities: A case study of Serbo-Croatian

Svetlana Godjevac

+ more

How does scrambling interact with focus, and what are the implications for processing and acquisition? In Serbo-Croatian, informational prominence (i.e., focus) can be expressed either prosodically, by a phrase accent, or syntactically, by word order. Contrary to standard assumptions, I show that even with non-neutral word order (in this case, non-SVO), a sentence can be ambiguous with respect to focus. As an example of implications of these results, I will suggest that the claim of Radulovic (1975) that children acquiring Serbo-Croatian at the age of 1;8 through 2;8 lack pragmatic word orderings must be reconsidered. I will offer a reanalysis of her data based on my theory of focus projection that shows that Serbo-Croatian children acquire pragmatic word orderings as early as 1;8.

2001-11-20

Towards optimal feature interaction in neural networks

Virginia De Sa

+ more

I'll start by reviewing the problem of why unsupervised category learning is difficult and present an algorithm I developed that makes use of information from other sensory modalities to constrain and help the learning of categories within single modalities.  I will then show that there is a key difference in the processing required for combining inputs within a sensory modality as opposed to that required for combining inputs between sensory modalities.  Finally, I'll show that similar issues are present in supervised learning algorithms; performance can be improved by changing the way inputs interact.  I will show examples from specially constructed problems as well as real world problems where performance is improved when some of the inputs are not used as inputs but used as outputs instead.  This last part is joint work with Rich Caruana.

2001-11-13

"Halting in Single Word Production: A Test of the Perceptual Loop Theory of Speech Monitoring"

Bob Slevc

+ more

The concept of a prearticulatory editor or monitor has been used to explain a variety of patterns in the speech error record. The perceptual loop theory of editor function (Levelt, 1983) claims that inner speech is monitored by the comprehension system, which detects errors by comparing the comprehension of formulated utterances to the originally intended concepts.
In this study, three experiments assessed the perceptual loop theory by looking at differences in the ability to inhibit word production in response to stop signals that varied in terms of their semantic or phonological similarity to the intended word. Subjects named pictures and sometimes heard (Experiment 1) or saw (Experiments 2 and 3) a word different from the picture name, which served as a signal to stop their naming response. When the signal was phonologically similar to the picture name, subjects had more difficulty stopping speech than when the signal was phonologically dissimilar to the picture name. This shows that inhibiting word production is more sensitive to phonological than to semantic similarity of a comprehended word, suggesting that errors are detected and avoided by comparing at a phonological rather than at a semantic level.

2001-11-06

Modeling semantic constraints in sentence processing

Robert Thornton

+ more

We present a sentence processing model to examine semantic effects in sentence processing. Previous connectionist work on sentence processing has used SRNs, which learn distributional information regarding sequential constraints on constituents, as well as other grammatical phenomena (i.e., agreement). The current model differs from previous work both in task (recognize the current word, rather than predict the next word) and representation (distributed semantic representations rather than localist lexical ones).

The model maps distributed syllabic representations onto distributed semantic (i.e., featural) representations. We examined the interaction of lexical, semantic, and distributional constraints in processing syntactic category ambiguities, such as "the desert trains", in which "trains" can be a noun ("the desert trains were late") or a verb ("the desert trains the soldiers").

The model was trained on 20,000 word triples from the parsed WSJ and Brown corpora. For each phrase, the network was presented with each word in succession (e.g., DESERT TRAINS ARE). The target was the correct semantics for the current word. A pair of "interpretation" nodes were connected to the semantic and context representations, encoding the interpretation of the phrase as NN or NV. High level semantic features (such as ISA-ENTITY), pragmatic constraints, and item specific regularities were all combined by the network and utilized to the extent to which they were informative, replicating the results of MacDonald (1993).

More generally, the model was able to calculate distributional statistics over the distributed semantic representations. It subsequently developed a representation of the contexts that a word appears in. Because the model generated this measure of the plausible semantics of possible continuations, it began to partially activate the relevant semantic features of the upcoming word before it was presented, such that plausible continuations (i.e., words with consistent semantic features) were easier to process. Thus, in this model, contextual facilitation arose because, at a given point in processing, the current input reliably cued relevant semantic features of the subsequent input (see Federmeier & Kutas, 1999; Schwanenflugel & Shoben, 1985, for support for such models). The nature of such facilitation, as well as a semantic account of grammatical processing, will be discussed.

2001-10-30

Temporal Processing and Language Disorders: Review and Evaluation

Don Robin

+ more

This discussion will overview temporal processing as a cause of language disorders in adults and children. The discussion will provide an historical overview, followed by a description of some data-based studies. Finally, the theoretical soundness of the concept will be discussed with reference to a treatment called "FastForWord".

2001-10-22

A Connectionist Investigation of Linguistic Arguments from the Poverty of the Stimulus: Learning the Unlearnable

John Lewis

+ more

Based on the apparent paucity of input, and the non-obvious nature of linguistic generalizations, Chomskyan linguists assume an innate body of linguistically detailed knowledge, known as Universal Grammar (UG), and attribute to it principles required to account for those properties of language that can reasonably be supposed not to have been learned (Chomsky, 1975). A definitive account of learnability is lacking, but is implicit in examples of the application of the logic. Our research demonstrates, however, that important statistical properties of the input have been overlooked, resulting in UG being credited for properties which are demonstrably learnable; in contradiction to Chomsky's celebrated argument for the innateness of structure-dependence (e.g. Chomsky, 1975), a simple recurrent network (Elman, 1990), given input modelled on child-directed speech, is shown to learn the structure of relative clauses, and to generalize that structure to subject position in aux-questions. The result demonstrates that before a property of language can reasonably be supposed not to have been learned, it is necessary to give greater consideration to the indirect positive evidence in the data and that connectionism can be invaluable to linguists in that respect.
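
For readers unfamiliar with the architecture, a simple recurrent network of the kind used here feeds in each word together with a copy of the previous hidden state and is trained to predict the next word. The sketch below shows only an untrained forward pass with toy dimensions and vocabulary (not the model, corpus, or parameters from the talk); training would adjust the weights so that the output distribution concentrates on grammatical continuations:

# Minimal Elman-style simple recurrent network for next-word prediction
# (illustrative dimensions and data; not the network or corpus from the talk).
import numpy as np

rng = np.random.default_rng(1)
vocab = ["the", "boy", "who", "is", "smoking", "crazy", "."]
V, H = len(vocab), 16

# Parameters: input->hidden, context->hidden, hidden->output
W_xh = rng.normal(scale=0.1, size=(H, V))
W_hh = rng.normal(scale=0.1, size=(H, H))
W_hy = rng.normal(scale=0.1, size=(V, H))

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(word_idx, context):
    """One SRN time step: new hidden state and a distribution over the next word."""
    h = np.tanh(W_xh @ one_hot(word_idx) + W_hh @ context)
    return h, softmax(W_hy @ h)

context = np.zeros(H)
for w in ["the", "boy", "who", "is", "smoking"]:
    context, next_dist = step(vocab.index(w), context)
print("P(next word) over vocab:", dict(zip(vocab, next_dist.round(2))))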

2001-10-16

THE INTERACTION OF SEMANTICS, PHONOLOGY, AND COMPOUNDING: IMPLICATIONS FOR THEORIES OF INFLECTIONAL MORPHOLOGY

Todd Haskell

+ more

In English and many other languages, the marking of qualities like noun number and verb tense has a quasi-regular character. To take noun number as an example, most nouns in English form their plural by adding the suffix '-s', e.g., 'rat' -> 'rats', 'book' -> 'books'. However, there are alternative ways of forming the plural that apply to only a few nouns or even a single noun, e.g., 'mouse' -> 'mice', 'goose' -> 'geese'.

Over the past two decades, there has been considerable debate over whether this sort of phenomenon is best accounted for by two mechanisms - one for the 'regular' cases, another for the 'exception' or 'irregular' cases – or a single mechanism which handles both sorts of cases. Sharp dissociations between the behavior of regular and irregular words have been used to argue for the dual-mechanism view. One apparent dissociation of this sort involves the interaction between pluralization and compound word formation. It has been noted that irregular plurals can appear in the modifier (left) position of noun-noun compounds, e.g. 'mice-eater', while regular plurals seem to be prohibited, e.g., '*rats-eater'.

The current project draws together reanalysis of previous work, new behavioral data, and computer modeling to argue that the constraints on plural modifiers in compounds are much more complex than the conventional characterization would suggest, and, as a consequence, that they are not easily accounted for within a dual-mechanism framework. An alternative account is proposed in which the acceptability of modifiers in compounds is determined by the interaction of multiple probabilistic ("soft") constraints. It is shown that such an approach, which does not make an explicit distinction between regulars and irregulars, actually provides a superior account of the data. Thus, the compounding phenomena, far from supporting the dual-mechanism view, actually present it with a serious challenge.

2001-10-08

Age of acquisition ratings: actions and objects.

Gowri Iyer

+ more

Certain word attributes such as frequency have been traditionally thought to be the best predictors of performance on a lexical task (e.g., picture naming). However, mounting evidence suggests that in certain lexical tasks, frequency effects may be wholly or partly explained by age of acquisition (AoA). In my talk tomorrow I will present the results of an age-of-acquisition study in which adults' ratings and response times were collected for 520 items (nouns) and 275 (verbs). The resulting AoA ratings were (1) reliable, replicating the AoA effects reported in earlier studies (for objects only), (2) valid, correlating highly with developmental data, and (3) the most powerful predictors of performance on a picture-naming task when compared to other predictor variables such as frequency etc. Discussion focuses on attempting to understand AoA's potency as a predictor and also some future directions.

2001-10-02

Psychophysics of Verb Conjugation

Antonella Devescovi, Simone Bentrovato, Elizabeth Bates et al.

+ more

Most of what is currently known about lexical access is based on studies of English nouns, in citation form, in the visual modality, typically through some kind of lexical-decision task. There is also a small literature, important on theoretical grounds, about the processing of regular vs. irregular past tense forms of verbs (especially in English). Beyond this, surprisingly little is known about how listeners process inflected verbs -- especially in richly inflected languages, in the auditory modality. Within the context of an interactive-activation model (The Competition Model), extended to account for real-time processing (as implemented in Elman's recurrent nets), our group has been studying the processing of inflected verbs in context. It became increasingly clear to us that research on processing of inflected verbs in context is hampered by absence of basic information about how listeners recognize inflected verbs. This realization motivated us to undertake a basic parametric study of how Italian listeners perceive and process inflected verbs, presented in randomized lists. On October 2, I will attempt a "first draft" presentation of results from this study. Input from members of the CRL community (especially our colleagues in linguistics) will be not only helpful, but crucial.
Fifty native speakers of Italian (college age) participated in one of two tasks. Half were asked to repeat auditorily presented (digitized) verbs, as fast and accurately as possible (i.e. the cued shadowing technique). The other half were asked to generate a subject pronoun that agrees with the verb. Fifty different verbs (all taken from the Italian CDI to represent the first verbs acquired by children) were presented in all six person/number combinations, within four of the many tense/aspect conditions available in the language (present indicative, imperfect past, future, remote past). All analyses are conducted over items (averaging over subjects within each task), to determine the physical and linguistic properties of inflected verbs that contribute positively or negatively to reaction times in each task. Predictors include multiple measures of word length (duration of whole word; length of root and suffix after the root; length of stem and suffix after the stem; number of syllables; number of characters--a good approximation of number of phonemes in Italian), prosody (stress position; canonicity (penultimate syllable stressed)), phonetics (presence/absence of initial frication), frequency (of the whole word and of the inflected form, from a spoken-word corpus), transitivity, and whether or not the word represents a concrete action. We also assessed effects of regularity, defined three different ways (in keeping with a very confusing literature on this topic).
Results indicate that Italian listeners are exquisitely sensitive to the unfolding of word structure in real time, using multiple sources of information, quickly and efficiently. Frequency effects are observed for both regulars and irregulars, regardless of how they are defined, in contrast with predictions based on Pinker and Ullman's Dual Mechanism theory. Regularity effects on reaction time appear to be explained by lower-level factors like length, frequency and word structure. However, significant effects of tense and person (and their interaction) remain when all other predictors are controlled, suggesting that either (a) we have failed to identify all the lower-level factors that contribute to these constructs, or (b) the dimensions of tense and person are emergent properties of the system that have a causal impact on the recognition and processing of inflected verbs above and beyond their lower-level correlates. Results have (we think) some important implications for verb processing within a structured context, leading to clear predictions about the effects of context on the "recognition point" for inflected verbs.
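
The item analyses described above are, in essence, multiple regressions with mean reaction time per item as the dependent variable and the word properties listed as predictors. A toy sketch of such an over-items regression (synthetic data and only a handful of the many predictors actually used):

# Toy over-items regression of mean RT on a few item-level predictors
# (synthetic data; the real analyses used many more predictors and items).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_items = 300
items = pd.DataFrame({
    "duration_ms": rng.normal(600, 80, n_items),        # whole-word duration
    "log_freq": rng.normal(2.0, 0.8, n_items),          # log frequency of the inflected form
    "n_syllables": rng.integers(2, 5, n_items),
    "penultimate_stress": rng.integers(0, 2, n_items),  # canonical stress position
})
# Synthetic RTs loosely tied to the predictors, plus noise
items["rt_ms"] = (900 + 0.4 * items.duration_ms - 40 * items.log_freq
                  + 15 * items.n_syllables - 10 * items.penultimate_stress
                  + rng.normal(0, 40, n_items))

model = smf.ols("rt_ms ~ duration_ms + log_freq + n_syllables + penultimate_stress",
                data=items).fit()
print(model.params.round(2))
print(model.pvalues.round(4))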

2001-06-05

Effects of Prior Mention on Sentence Production and Word Recall

H. Wind Cowles and Victor Ferreira

+ more

Studies of language production have shown that speakers tend to place easily retrieved arguments early in sentences. For example, Bock (1977) and Bock and Irwin (1980) reveal that previously given information shows an early-mention advantage, as it tends to be mentioned before new information. They suggest that this effect may come from the greater retrievability (both lexically and conceptually) of given information over new information, rather than from the discourse status of the arguments per se. However, it is still unclear from these studies whether discourse status affects sentence production directly. Also, there are different ways to establish information as given, many of which actually confer a different discourse status to that information. The present study examines the role of discourse status on sentence production by looking at topichood and givenness. Two experiments were conducted to see if discourse status affects sentence production, and if so, whether that effect is due to a more general effect of increased lexical activation.
In the first experiment, we found that topic arguments show an early-mention advantage over given arguments, suggesting that topic- versus given-status exerts a specific effect on target sentence production. Thus, a speaker's choice of a sentence structure is sensitive not only to whether an argument is mentioned previously, but how it is mentioned, such that arguments that are previously mentioned as topics are especially likely to be mentioned early.
A second experiment looking at word recall found that the effect of discourse status goes away when speakers are asked to recall a list of words rather than use them in a sentence. This suggests that the effect of discourse status is not due to lexical activation, but is specific to the process of forming sentences.

2001-05-29

Verbal and Non-verbal Auditory Processing in Aphasic Patients

Ayse Pinar Saygin

+ more

There have been findings indicating that left hemisphere lesions may cause an impairment in associative and/or semantic processing of auditory information, not only in linguistic but also in non-linguistic domains. In this talk, I will present a study of the online relationship between verbal and non-verbal auditory processing by examining aphasic patients' abilities to match environmental sounds and corresponding phrases to simple line drawings. In this study, we also manipulated the effect of competition between the visual target and foil in both verbal and non-verbal conditions. Overall, we found robust group differences in performance: All patient groups were impaired relative to normal controls. Broca's and Wernicke's aphasics were most impaired, while Anomic and RHD patients performed similarly to each other, showing less severe deficits. There was also a reliable effect of foil type (related vs. not related to the target) that generalized across groups. We found that impairments in verbal and non-verbal domains tended to go hand in hand; there was very little evidence for the relative preservation of non-verbal auditory processing in this set of aphasic patients, a result that is surprising based on the view of aphasia as a primarily linguistic deficit. Instead, the results suggest that there is significant overlap of processes and neural resources utilized in verbal and non-verbal processing of auditory information.

2001-05-22

Evidence for a U-shaped Learning Curve

Michael Klieman

+ more

This study examines the acquisition patterns of English unaccusative verbs by learners of English as a second language (ESL). Previous studies of written production (Oshita 1998, 2000; Zobl 1989) found that intermediate to advanced ESL learners produced ungrammatical unaccusative forms about 10% of the time, and that the vast majority of these errors were "passive" unaccusative errors (*The boys were arrived). While one of Oshita's claims is that beginners do not make such errors, he did not control for level in his study. The present study reports two experiments building on Oshita's work, this time testing three skills: spoken production, written production, and error recognition. The experiments were also crucially controlled for level. The finding was that the acquisition pattern of unaccusatives is actually U-shaped; that is, at later stages of acquisition, ESL learners stopped producing ungrammatical unaccusative verbs and produced only grammatical ones. The results showed that not only did the error rate in both production tasks stay constant at 10%, but that learners actually stopped producing ungrammatical unaccusative forms after the advanced level (James 1985). These data indicate that ESL learners have some hope, at least with respect to the acquisition of unaccusative verbs: in later stages of acquisition, unaccusative structures are acquired and are no longer subject to non-target passivization. These findings are significant for the field of second language acquisition research, as the pattern of acquisition shown here closely mirrors that of first language research, suggesting that there must be some parallels between the two types of acquisition.

2001-05-15

How (Not) to Build a Language:
The Trouble With Pronouns

Ezra van Everbroeck

+ more

There is a large space of possible natural languages, but only some types are attested. This raises the question how many of the gaps are the result of historical accidents and how many are unattested because they are in some way unlearnable. Using connectionist simulations, I have explored the learnability issue by testing how easy it is to determine 'who did what to whom' in a broad range of possible languages. The linguistic parameters tested include word order, case marking, head marking, pronouns, pro-drop and agreement. In the talk, I will present results about the effect of each of these parameters and describe how they interact. I will also consider whether the connectionist models work roughly like the parsing strategies used by children acquiring a language.

2001-05-08

Can high-dimensional memory models have affordances? Comparing HAL, LSA and the Indexical Hypothesis.

Dr. Curt Burgess

+ more

High-dimensional memory models capture meaning by encoding and transforming the contexts in which words appear (HAL: Burgess & Lund, 1997; LSA: Landauer & Dumais, 1997). Glenberg and Robinson (JML, 2000) argue that the encoding of abstract symbols that are arbitrarily related to what they signify (no symbol grounding) is an implausible approach to modeling meaning. Their subjects make sensibility and envisioning judgements to sentences that are related, afforded, or non-afforded showing a preference for afforded and related conditions; LSA shows only a relatedness effect. They conclude that high-dimensional models are crippled when dealing with novel sentences. We use the HAL and LSA models and respond to their claims with a series of six experiments. We suggest that the symbol grounding issue, as articulated by Glenberg and Robinson, is a red herring and discuss the abilities and limitations of the high-dimensional memory models with respect to modeling sentence comprehension.
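
The core of a HAL-style model is a word-by-word co-occurrence matrix accumulated with a sliding window over a corpus, with word similarity then computed as distance or cosine in that space. A minimal sketch (toy corpus and an unweighted symmetric window; the actual HAL model weights co-occurrences by distance and is built from very large corpora):

# Minimal HAL-style co-occurrence vectors from a toy corpus (unweighted symmetric
# window; illustrative only, not the published HAL or LSA implementations).
from collections import defaultdict
from math import sqrt

corpus = "the cat chased the mouse and the dog chased the cat".split()
window = 2
cooc = {w: defaultdict(float) for w in set(corpus)}

for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            cooc[w][corpus[j]] += 1.0   # count each neighbor within the window

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

print("sim(cat, dog) =", round(cosine(cooc["cat"], cooc["dog"]), 2))
print("sim(cat, chased) =", round(cosine(cooc["cat"], cooc["chased"]), 2))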

2001-05-01

96 Sentences

Frederick Dick & Marty Sereno

+ more

A fundamental challenge for developing children is making productive use of information in their environment, particularly when these cues take relatively abstract forms. One area of protracted development in this regard is children's use of sentential cues to agency, such as word order and agreement morphology. Here, we use data from a sentence interpretation task to trace the costs and benefits of informational cue use, with a special emphasis on the effects of brain damage or learning impairments on language development. We compare these data to those from similar experiments on normal adults and aphasic patients, and relate them to a frequency- and processing-based account of language processing skills.

2001-04-24

Aging and individual differences in auditory sentence processing

Kara Federmeier

+ more

The contents and organization of semantic memory seem to remain relatively intact over the adult life-span, but less is known about how such information is accessed and used in real-time during language processing. In this talk I will present event-related potential (ERP) data collected while younger (20-30 years old) and older (60-75 years old) adults listened to pairs of sentences (as continuous, natural speech) for comprehension. The sentence contexts varied in their constraint and ended with either (1) the word most expected in the context ("expected exemplar"), (2) an unexpected word from the expected semantic category ("within category violation"), or (3) an unexpected word from a different semantic category ("between category violation"). Data from younger subjects replicated that previously observed for word by word reading with the same materials. The observed pattern suggests that the younger subjects actively use context information to prepare for the processing of likely upcoming stimuli (i.e., to predict).

In contrast, older adults' data patterned with plausibility and did not show strong effects of sentential constraint. Older adults clearly comprehend the sentences, but seem to use predictive context information less effectively. A subset of older adults, however, showed the younger response pattern, and the tendency to do so was highly correlated with several neuropsychological measures. Thus, resource availability may off-set certain age-related changes in how semantic memory is accessed during sentence processing.

2001-04-17

Comparing Lexical Access for Nouns & Verbs in a Picture Naming Task

Elizabeth Bates

+ more

Most of what we currently know about word recognition and retrieval is based on the study of English nouns (usually concrete, monosyllabic English nouns). There has, however, been a recent resurgence of interest in the effects of form class (content words vs. function words; nouns vs. verbs) on lexical access, within and across languages. Questions and controversies about the differential processing of nouns and verbs have come up simultaneously in at least four areas: (1) potential dissociations between noun and verb access in aphasic patients; (2) functional brain imaging studies indicating partial dissociations in the neural regions that mediate nouns vs. verbs; (3) cross-linguistic studies of early child language that challenge the long-standing assumption that nouns are always acquired before verbs, and (4) real-time processing studies of noun vs. verb access, inside and outside of a phrase or sentence context. Our group has been working in all of these areas, and to serve these disparate goals, we have undertaken a large norming study comparing lexical access for concrete nouns vs. verbs. Although these studies are being conducted in several languages and modalities, our largest initiative to date has been a comparative study of action vs. object naming. In this presentation, I will give an overview of our preliminary results for action vs. object naming in English, based on 520 black-and-white pictures of everyday objects and 275 black-and-white drawings of concrete transitive and intransitive actions. Dependent variables include percent name agreement (for each item, percent of subjects who produced the dominant response, also called the "target name"), reaction time to produce the dominant response, and number of alternative names provided by the 100 subjects who participated in the study (50 subjects for object naming, another 50 for action naming). Independent variables include objective visual complexity of the pictures (based on JPG file size), and several attributes of the dominant response that are potential predictors of naming behavior, including objective age of acquisition (based on the MacArthur Communicative Development Inventories), log natural frequency, length, initial frication, word complexity and homophony (i.e. whether the same target name was given for two or more stimuli).

The most important result to date is an unhappy one for those of us who would like to compare action- and object-naming using items that are matched for difficulty on all relevant parameters: IT CANNOT BE DONE. Action naming is harder than object naming no matter what we do, and a match on one dimension invariably leads to a serious mismatch on another. Overall, action naming elicits significantly lower agreement, more alternative names, and slower RTs for the dominant/target name. Action vs. object names are also significantly different on virtually all of the independent or predictor variables -- although this difference does not always favor nouns. Not surprisingly, action pictures are significantly more complex (on average) than object pictures, and action names tend to be acquired significantly later than object names. However, action names are also significantly shorter, less complex and more frequent than object names, factors that should (in principle) make them easier to access. Correlational and regression analyses show that action and object naming are also influenced by somewhat different variables -- sometimes in opposite directions. For example, when all other predictors are controlled, frequency is associated with faster reaction times for object naming but slower reaction times for action naming. Some potential explanations for these paradoxical results will be offered, revolving around the strategies that subjects use to deal with the special problem of drawing inferences about action from a static picture. Although these results may seem very technical (and far removed from the interests of linguists and psycholinguists), they have implications for many different research areas (e.g. the four cited above) and for competing theories of the mental/neural representations that underlie nouns and verbs.

2001-04-10

Central Bottleneck Influences on the Processing Stages of Word Production

Vic Ferreira

+ more

When producing a word, a speaker proceeds through the stages of lemma selection, phonological word-form selection, and phoneme selection. We assessed whether processing at each of these levels delays processing in a concurrently performed task. Subjects named line-drawn pictures as they performed a three-tone auditory discrimination task. In Experiment 1, subjects named pictures after cloze sentences; lemma selection was manipulated with high- and low-constraint cloze sentences, and phonological word-form selection with pictures that had high- and low-frequency names. In Experiment 2, subjects named pictures while ignoring visually presented distractor words; lemma selection was manipulated with conceptually related distractors and phoneme selection with phonologically related distractors. The lemma selection manipulations in both experiments affected tone discrimination response times as much as picture naming response times, as did the phonological word-form selection manipulation in Experiment 1. However, the phoneme selection manipulation in Experiment 2 affected only picture naming times. The results suggest that lemma selection and phonological word-form selection give rise to bottleneck effects, delaying processing in concurrently performed tasks, while phoneme selection does not.

2001-03-13

Constructing Inferences in Text Comprehension

Murray Singer University of Manitoba

+ more

Text inference processes are explored in the framework of a constructionist theory. Three assumptions of constructionism are that: (a) readers maintain coherence at multiple levels of text representation; (b) readers access possible causes of outcomes described in text; and (c) the reader's goal regulates text processing. Two sets of experiments are described that contrast constructionism with competing theories. Alternate approaches for simulating these effects are outlined.

2001-03-06

Lexically specific constructions in the acquisition of inflection in English

Stephen Wilson UCLA, Department of Linguistics

+ more

Children learning English often omit grammatical words and morphemes, but there is still much debate over exactly why and in what contexts they do so. This talk presents the results of a study investigating the acquisition of three elements which instantiate the grammatical category of "inflection" -- copula 'be', auxiliary 'be' and 3sg present agreement -- in longitudinal transcripts from five children. The aim is to determine whether inflection emerges as a unitary category, as predicted by recent generative accounts, or whether it develops in a more piecemeal fashion, consistent with constructivist accounts. It was found that the relative pace of development of the three morphemes studied varies significantly from child to child, suggesting that they do not depend on a unitary underlying category. Furthermore, early on, 'be' is often used primarily with particular closed-class subjects, suggesting that forms such as 'he's' and 'that's' are learned as lexically specific constructions. These findings are argued to support the idea that children learn "inflection" (and by hypothesis, other functional categories) not by filling in pre-specified slots in an innate structure, but by learning some specific constructions involving particular lexical items, before going on to gradually abstract more general construction types.

2001-02-27

Developmental changes in sentence processing: electrophysiological responses to semantic and syntactic anomalies in 3 to 4 year old children and adults

Debbie Mills & Melissa A. Schweisguth University of California San Diego

+ more

These studies examine the development of cerebral specializations for semantic and syntactic processing in young children and adults. The ERP technique is especially well suited for studying these issues. In normal adults, semantic and syntactic processes elicit distinct patterns of ERPs that differ in timing, morphology and distribution. The characteristic patterns of ERPs have been taken as evidence that these different linguistic processes are subserved by distinct neural systems. Our approach has been to study developmental changes in the brain's response to single words (6 to 36 months) and in simple sentences (3 to 4 years). These studies address several questions: a) whether different neural systems mediate semantic and syntactic processing from an early age, b) to establish the developmental trajectories for these systems and how they change as a function of language development, and c) how lexical development influences and interacts with grammatical development.

Today's talk will focus on semantic and syntactic violations in auditory sentence processing in young children and adults. ERPs were collected as participants heard a total of 160 sentences, half with sentence-medial violations, and half controls: e.g. semantic anomaly: "When Justin is thirsty, he drinks teddy bears or soda." and word order (syntactic) violation: "When Justin is thirsty, he water drinks or soda." Children were administered a series of behavioral language tasks prior to the ERP visit. Participants were also asked to judge a subset of the sentences during ERP testing. We will present data from 16 typically developing children (11 females) with a mean age of 4 years (3.29-4.83) and 19 adults (all right-handed monolingual English speakers). In adults, semantic anomalies elicited a typical N400 response. In children, semantic anomalies elicited the expected late bilateral posterior negativity but also elicited an earlier bilateral anterior positivity. In adults, violations of word order elicited a posterior positive component, the P600. In children, ERPs to order violations elicited an N400 response and an anterior positive response much like the pattern observed to semantic violations. We also explored patterns of activity to different types of semantic and order violations. The results were interpreted as being consistent with the hypothesis that in early language development similar neural systems subserve semantic and syntactic processing and that cerebral specializations for different subsystems develop through experience with language.

2001-02-20

The Neural Basis of Predicate-Argument Structure

James R Hurford University of Edinburgh

+ more

The mental representations of pre-linguistic creatures could not have contained individual constants, i.e. terms guaranteed to denote particular individual objects. Hence, representations of the form PREDICATE(x), where `x' is an individual variable, seem appropriate.

Research on vision (and, to a lesser extent, on audition) has discerned in primates and humans two largely independent neural pathways; one locates objects in a body-centered spatial map, the other attributes properties, such as colour and movement, to objects. In vision these are the dorsal and the ventral pathways. In audition, there are similarly separable `where' and `what' pathways. The evidence comes from lesion studies on monkeys, performance testing and imaging studies on normal and pathological subjects, and psychological testing of normal subjects on diagnostic tasks.

The brain computes actions using a very small number of `deictic' or `indexical' variables pointing to particular objects in the immediate scene. Parallels exist between such non-linguistic variables and the deictic devices of languages. Indexicality and reference have linguistic and non-linguistic (e.g. visual) versions, sharing the concept of ATTENTION to an object. The individual variables, x, y, z, of logical formulae can be interpreted as corresponding to these mental variables. In computing action, the deictic variables are linked with relatively permanent `semantic' information about the objects in the scene at hand. Such information corresponds to logical predicates.

PREDICATE(x) is a schematic representation of the brain's integration of two broadly separable processes. One process is the rapid delivery by the senses (visual and/or auditory) of the spatial location of a referent object relative to the body, represented in parietal cortex. The eyes, head, body and hands can be oriented to the referent object, which instantiates a mental variable. The other process is the slower analysis of the delivered referent by the perceptual (visual or auditory) recognition subsystems in terms of its properties.

Mental scene-descriptions are necessary for carrying out the practical tasks of primates, and therefore pre-exist language phylogenetically. The type of scene-descriptions used by non-human primates would be reused for more complex cognitive, and ultimately linguistic, purposes. The provision by the brain's sensory/perceptual systems of a pool of about four variables for ad hoc assignment to objects in the accessible environment, and the separate processes of perceptual categorization of the objects so identified, constitute a preadaptive platform on which an early system for the linguistic description of scenes developed. This system was based on conjunctions of propositions of the form PREDICATE(x), involving up to about four different variables. An example of such a scene-description might be: APE(x) & STICK(y) & MOUND(z) & HOLE(w) & IN(w,z) & PUT(x,y,w)
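As a purely illustrative aid (an editor's sketch, not Hurford's formalism), the scene-description idea can be rendered as a small data structure: a fixed pool of roughly four deictic variables, each bound to an attended object, plus a list of predications over those variables. All names and locations below are hypothetical.

    # Minimal sketch of a scene description over a small pool of deictic variables.
    from dataclasses import dataclass, field

    VARIABLE_POOL = ("x", "y", "z", "w")   # roughly four variables available at once

    @dataclass
    class Scene:
        bindings: dict = field(default_factory=dict)      # variable -> attended object/location ('where')
        predications: list = field(default_factory=list)  # categorizations of the referents ('what')

        def attend(self, var, location):
            assert var in VARIABLE_POOL, "only a small pool of deictic variables is available"
            self.bindings[var] = location                 # fast spatial indexing of a referent

        def predicate(self, name, *vars):
            self.predications.append((name, vars))        # slower perceptual categorization

        def formula(self):
            return " & ".join(f"{p}({','.join(v)})" for p, v in self.predications)

    # Reconstructing the example from the abstract.
    scene = Scene()
    for var, loc in zip(VARIABLE_POOL, ["left", "ground", "ahead", "ahead"]):
        scene.attend(var, loc)
    for pred, arg in [("APE", "x"), ("STICK", "y"), ("MOUND", "z"), ("HOLE", "w")]:
        scene.predicate(pred, arg)
    scene.predicate("IN", "w", "z")
    scene.predicate("PUT", "x", "y", "w")
    print(scene.formula())   # -> APE(x) & STICK(y) & MOUND(z) & HOLE(w) & IN(w,z) & PUT(x,y,w)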

2001-02-06

Don Robin San Diego State University

+ more

This talk will review studies on apraxia of speech in our laboratory that focus on nonspeech motor control of the articulators. The work is designed to shed light on the underlying impairment in apraxia of speech and to provide insight into possible treatments for this devastating speech disorder. Three studies will be reviewed that point to a disorder of motor programming as the cause of apraxia of speech. In addition, preliminary work on a principled approach to treating the disorder that stems from our basic studies will be presented.

2001-01-30

Comparing Reading And Auditory Comprehension In Aphasia

Jelena Jovanovic Department of Cognitive Science, University of California, San Diego

+ more

In this talk, I will review the basic tenets of the classical (Wernicke-Geschwind) model of language processing, and offer examples of recent findings in aphasia research that are *not* accounted for by this model. I will then present my own research findings, which address several unexplored questions about reading and auditory comprehension in aphasia, and I will discuss how they contribute to the expansion and modification of the classical model. The questions I asked are: 1. What is the relationship between reading and auditory comprehension in aphasic patients? 2. Can relative performance in reading and auditory comprehension be related to (a) aphasia type and (b) lesion location?

I evaluated these factors in 78 right-handed, single left-hemisphere stroke patients. Reading and auditory comprehension scores, as well as aphasia type, were assessed by the Western Aphasia Battery. Scores were compared across all patients, then clustered to reveal patients with comprehension advantage in one modality. Brain lesion sites were revealed by MRI. To determine common lesioned areas in patients with a modality-specific comprehension advantage, lesion sites were standardized and overlapped. My results reveal a trend toward poorer reading comprehension across aphasics, with notable exceptions. Broca's aphasics appear to have the worst reading comprehension relative to their auditory comprehension. Wernicke's aphasics show the opposite pattern: in most cases, aphasics of this type have a slight reading comprehension advantage. I conclude that reading and auditory comprehension may be differentially affected in aphasia, and in notable patterns across aphasia types. Lesion analysis revealed a small region of inferior motor cortex spared in patients with better reading comprehension, but lesioned in almost all with auditory comprehension advantage. This result supports the possibility that motor-articulatory processing contributes to reading more than to auditory comprehension.

2001-01-23

The development of long-term explicit memory in infancy: Brain and behavioral measures.

Leslie Carver Center on Human Development and Disability at the University of Washington

+ more

The ability to remember information about the past is hypothesized to emerge in the second half of the first year of life in human infants. Although there is substantial information from both cognitive neuroscience and behavioral psychology to support this hypothesis, there is little direct evidence with which the question can be addressed. Using deferred imitation and event-related potentials (ERP), infants' memory abilities were tested between the ages of 9 and 16 months. The results of several studies indicate that there are important developments in the ability to recall information over very long delays at the end of the first year of life. Results from behavioral studies indicate that infants can recall progressively more information about the order of events over progressively longer delay intervals near the end of the first year. Results from ERP studies show that these behavioral changes occur concomitantly with developments on the neurophysiological level. Furthermore, the evidence suggests that it is retrieval of information, rather than encoding, that develops. These results support the idea that the emergence of connections between medial temporal lobe structures thought to be involved in encoding and storage of information and the prefrontal areas thought to be important for retrieval of order information over the very long term marks an important event in the emergence of long-term explicit memory ability. These results support the contention that the explicit memory system is emergent near the end of the first year of life.

2001-01-16

Putting Language Back in the Body: The Influence of Nonverbal Action on Language Production and Comprehension.

Spencer Kelly

+ more

In my talk, I theorize that the human capacity for language evolved within a rich and structured matrix of bodily action. I hypothesize that if bodily action did indeed play a foundational role in the emergence of language over evolution, those effects may continue to have a powerful impact on how people use language in the present. Specifically, I examine the role that bodily action plays in language processing and development on three levels of analysis: cognitive, neurological, and social. On the cognitive level, I will first talk about how nonverbal actions combine with speech not only to help make communication clearer for listeners, but also to help speakers think. On the neurological level, I will discuss how different actions influence how the brain processes low-level speech information from one moment to the next. Finally, on the social level, I will argue that nonverbal actions play an important role in how people understand others' intentions. Throughout my talk, I approach these issues from two developmental timeframes: moment to moment and ontogenetic.

2000-11-21

The Early Word Catches the Weights: Age of acquisition effects in Connectionist networks.

Gary Cottrell Department of Computer Science and Engineering, UCSD

+ more

The strong correlation between the frequency of words and their naming latency has been well documented. However, as early as 1973, the Age of Acquisition (AoA) of a word was alleged to be the actual variable of interest, but these studies seem to have been ignored in most of the literature. Recently, there has been a resurgence of interest in AoA. While some studies have shown that frequency has no effect when AoA is controlled for, more recent studies have found an independent contribution of frequency and AoA. Connectionist models have repeatedly shown strong effects of frequency, but little attention has been paid to whether they can also show AoA effects. Indeed, several researchers have explicitly claimed that they cannot show AoA effects.

In this work, we explore these claims using a simple feed-forward neural network. We find a strong relationship between the epoch in which a pattern is acquired (measured AoA) and final error on a pattern. We find this in a range of mapping tasks, from consistent mappings (identity mapping), similar to orthography to phonology, to arbitrary mappings (random mappings), similar to object naming. In almost all cases, there is also a contribution of frequency. In a simulation of a reading task, we find the standard frequency × consistency interaction, mirrored by an AoA × consistency interaction. We also have begun to investigate the properties that cause some patterns to be acquired earlier or later than others.

This is joint work with Mark Smith and Karen Anderson.
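To make the kind of simulation described above concrete, here is a minimal sketch, not the authors' actual model or code: a one-hidden-layer network trained on an arbitrary (random) mapping with per-pattern presentation frequencies, logging the epoch at which each pattern is first acquired and its final error. Network sizes, learning rate, and the acquisition threshold are illustrative assumptions.

    # Sketch: measured age of acquisition (AoA) vs. final error in a feed-forward net.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_task(n_patterns=20, dim=10, arbitrary=False):
        # binary inputs; targets are the inputs themselves (identity mapping, akin to
        # orthography-to-phonology) or random patterns (arbitrary mapping, akin to naming)
        X = rng.integers(0, 2, size=(n_patterns, dim)).astype(float)
        Y = rng.integers(0, 2, size=(n_patterns, dim)).astype(float) if arbitrary else X.copy()
        return X, Y

    def train(X, Y, freqs, hidden=15, lr=0.1, epochs=2000, threshold=0.05):
        n, d_in = X.shape
        W1 = rng.normal(0, 0.5, (d_in, hidden))
        W2 = rng.normal(0, 0.5, (hidden, Y.shape[1]))
        acquired = np.full(n, -1)   # epoch at which each pattern's error first drops below threshold
        for epoch in range(epochs):
            for i in rng.permutation(n):
                if rng.random() > freqs[i]:   # higher-frequency patterns are presented more often
                    continue
                h = 1.0 / (1.0 + np.exp(-(X[i] @ W1)))     # sigmoid hidden layer
                o = 1.0 / (1.0 + np.exp(-(h @ W2)))        # sigmoid output layer
                delta_o = (Y[i] - o) * o * (1 - o)         # backprop, squared-error loss
                delta_h = (delta_o @ W2.T) * h * (1 - h)
                W2 += lr * np.outer(h, delta_o)
                W1 += lr * np.outer(X[i], delta_h)
            H = 1.0 / (1.0 + np.exp(-(X @ W1)))
            O = 1.0 / (1.0 + np.exp(-(H @ W2)))
            err = np.mean((Y - O) ** 2, axis=1)            # per-pattern error this epoch
            acquired[(acquired < 0) & (err < threshold)] = epoch
        return np.where(acquired < 0, epochs, acquired), err

    X, Y = make_task(arbitrary=True)
    freqs = rng.uniform(0.2, 1.0, size=len(X))             # per-pattern presentation probability
    aoa, final_err = train(X, Y, freqs)
    # the relationship under discussion: later-acquired patterns end up with higher final error
    print(np.corrcoef(aoa, final_err)[0, 1])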

2000-11-14

ERP Study on the Processing of Filler-Gap Dependencies in Japanese Scrambling

Mieko Ueno Department of Linguistics, UCSD

+ more

This experiment investigated the processing of so-called "scrambled" sentences in Japanese. Scrambling of sentence constituents is a common phenomenon in many of the world's languages, particularly those with rich case-marking systems (e.g. Latin). Japanese has canonical subject-object-verb (SOV) word order; in this event-related brain potential (ERP) study, direct objects were displaced from their canonical position preceding the verb to a position further to the left preceding the subject, resulting in OSV word order. Some of these direct objects were demonstrative pronouns (e.g. `this', `that'), while others were interrogative pronouns, so-called "wh"-words (e.g. `who' and `what').

The first question that motivated this study was whether such "scrambling" of sentence constituents would have the same kinds of processing effects as the formation of wh-questions and relative clauses in SVO languages like English. Wh-questions and relative clauses are analyzed in similar ways in linguistic theory: question words and relative pronouns must both occur clause-initially, and they share other syntactic properties as well. Previous ERP studies have shown that holding a displaced constituent like a question word or relative pronoun (filler) in working memory until it is assigned to its canonical position (gap) elicits slow anterior negative potentials across the sentence, and that assigning the filler to the gap elicits left anterior negativity between 300 and 500 msec to the word following the gap (Kluender and Kutas 1993; King and Kutas 1995, among others). I tested whether such filler-gap ERP effects would be elicited by scrambled sentences in Japanese as well. Unlike English wh-words, Japanese wh-words usually remain in their canonical position just like non-wh constituents (although wh-words can also be scrambled just like any other constituent); this is referred to in linguistics as "wh-in-situ", and is a common pattern for asking questions across the world's languages. The second question was then whether there would be any evidence of processing specific to this pattern of wh-in-situ in Japanese.

Stimulus sentences were mono-clausal questions with wh- and demonstrative pronouns either "scrambled" (preceding the subject) or "in-situ" (following the subject and preceding the verb), as shown in English gloss below (ACC=accusative case, NOM=nominative case).

The local newspaper-to according

[what-ACC/that-ACC] the reckless adventurer-NOM finally [what-ACC/that-ACC]

discovered-Q(UESTION)

`According to the local newspaper, did the reckless adventurer finally discover what/that?'

Filler sentences manipulated the sentence position of scrambled elements, case-marking, and number of clauses to prevent strategic processing of stimulus sentences.

The results basically replicated the ERP effects in response to constituent displacement in wh-questions and relative clauses in English: slow anterior negative potentials between scrambled constituents and their gaps, and left anterior negativity between 300 and 600 msec at post-gap positions. In addition, both scrambled and in-situ wh-sentences elicited phasic right anterior negativity between 300 and 600 msec to the verb+question particle (Q) position at sentence end. This suggests increased processing load for both scrambled and in-situ wh-sentences compared to their non-wh counterparts. This may be because Japanese wh-words require a question particle (Q) attached to the final verb, and this requirement may create another type of dependency between the wh-word and the final question particle.

I conclude by discussing how these results might map onto current models of Japanese sentence processing.

2000-11-07

What's Wrong With The Autistic Brain And Why Can't Developmental Plasticity Take Care Of It?

Axel Mueller University of California San Diego

+ more

Behavioral and, more recently, neuroimaging studies have demonstrated the remarkable potential of the developing brain to reorganize following insult. There is now general consensus that the developmental disorder of autism requires explanation on the neurobiological level (rather than, as previously thought, in experiential terms). Even though etiological mechanisms and neural loci of abnormality in autism are not fully established, it is clear that these abnormalities have an early (intrauterine or postnatal) onset. This raises the question why compensatory mechanisms at work following gross structural lesion are less effective (or even absent) in developmental disorders such as autism, which almost always results in lifelong cognitive impairment. I will present some recent neuroimaging studies suggesting abnormal neurofunctional maps in autism. Conventional procedures of groupwise analyses in "normalized" space may partially mask the biological bases of these findings. Very few studies have examined activation patterns in autism on the single-case level. First findings suggest that individual variation of neurofunctional organization may be abnormally pronounced, potentially reflecting diversity of etiological pathways. Activation maps in the autistic brain have been found to be unusually scattered. This may relate to suspected disturbance of neural growth regulation observed in structural studies. Lack of compensatory reorganization can be attributed to the diffuse nature of these disturbances.

2000-10-31

"Metaphor and the Space Structuring Model"

Seana Coulson University of California, San Diego

+ more

In this talk we outline the meaning construction operations involved in metaphor comprehension, and assess the claim that the right hemisphere (RH) is specialized for this sort of nonliteral processing. The focus is on the contrasting predictions about on-line comprehension of metaphoric language made by two models of high-level language processing. One model is the standard pragmatic model (Grice, 1975), which posits distinct mechanisms for literal and nonliteral language processing. The other model is the space structuring model, which is based on the theory of conceptual integration, also known as blending (Coulson, in press; Fauconnier & Turner, 1998). In the space structuring model, literal and nonliteral comprehension both proceed via the construction of simple cognitive models and the establishment of various sorts of mappings, or systematic correspondences between elements and relations in each.

Experiments addressed three issues: (i) whether there is a qualitative difference in the processing of metaphors and more literal language; (ii) whether the continuum of metaphoricity described above predicted on-line comprehension difficulty; and, (iii) whether the right hemisphere is specialized for metaphor processing. Results suggest that though the comprehension of metaphors is more effortful than the comprehension of literal language, the same neural resources are recruited for the construction of both sorts of meanings. Further, evidence from event-related brain potentials supports a role for the right hemisphere in metaphor comprehension, but argues against the suggestion that right hemisphere semantic representations are somehow specialized for metaphor comprehension.

2000-10-24

"Inflectional Morphology and the Activation of Thematic Role Concepts"

Todd R. Ferretti University of California, San Diego

+ more

According to most linguistic and psycholinguistic theories, the assignment of a verb's thematic roles to nouns in sentences is crucial for sentence comprehension. However, despite this consensus, relatively little research has investigated how detailed the conceptual information is that becomes available when verbs are read or heard. The present research addresses this issue in two ways. First, in a series of single-word priming experiments I demonstrate that verbs immediately activate knowledge of typical agents (arresting-cop), patients (arresting-criminal), and instruments (stirred-spoon). The second part of this research extended these results by investigating how people combine morpho-syntactic information (e.g., aspect) with world knowledge of events when they read verbs and noun phrases in isolation. In one experiment, subjects read briefly presented verb phrases marked with either imperfective (was verbing) or perfect (had verbed) aspect. They then named visually presented targets that were typical locations (was skating - arena). Typical locations of events were more highly activated when the verbs referenced the situations as ongoing (imperfective) versus completed (perfect). The final experiment examined how people integrate world knowledge of agents and patients in specific events with the aspectual properties of present and past participles to constrain interpretation of isolated phrases such as arresting cop and arrested crook. An implemented competition model was used to generate predictions about how people interpret these types of phrases. The model correctly predicted that subjects combined typical patients more easily with past participles (arrested crook) than with present participles (arresting crook). Interestingly, they often interpreted phrases like arresting crook as verb phrases when the head noun was a great patient / terrible agent. Furthermore, subjects combined typical agents with present participles (arresting cop) more easily than with past participles (arrested cop). Thus the activation of world knowledge of event participants is modulated by grammatical morphemes, and people weight these sources of information equally when combining them to constrain thematic role assignment during on-line interpretation of phrases.

2000-10-17

"A Connectionist Model of Spatial Knowledge Acquisition"

Paul Munro University of Pittsburgh

+ more

Representations of spatial location as measured by priming studies have shown dependencies on both spatial proximity in the environment and temporal contiguity during acquisition. We have simulated these results using a feed-forward network that is trained to make temporal associations over an external pattern space that has intrinsic spatial structure. The hidden unit representations develop similarity properties that capture properties from both the time and space domains. The relative influence of temporal and spatial structure on the internal representations is seen to change over the course of learning. This leads to the prediction that spatial similarity should show an initial dominance that is eventually superseded by similarity in the temporal domain.
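The following is a minimal sketch of this kind of simulation, under the editor's simplifying assumptions rather than Munro's actual architecture or stimuli: locations on a small grid are coarse-coded, a feed-forward network learns temporal associations along a fixed tour (predicting the next location visited), and hidden-layer similarity is then compared with spatial proximity and temporal contiguity.

    # Sketch: hidden-unit similarity vs. spatial proximity and temporal contiguity.
    import numpy as np

    rng = np.random.default_rng(1)
    coords = np.array([(x, y) for x in range(4) for y in range(4)], float)   # 16 grid locations
    units = coords.copy()                                                    # one coding unit per grid point

    def encode(p, width=1.0):
        # coarse (Gaussian) spatial code for a location: intrinsic spatial structure
        return np.exp(-np.sum((units - p) ** 2, axis=1) / (2 * width ** 2))

    tour = rng.permutation(len(coords))            # fixed temporal order of visits
    X = np.array([encode(coords[i]) for i in tour])
    Y = np.roll(X, -1, axis=0)                     # target: the next location's pattern

    W1 = rng.normal(0, 0.3, (16, 10))
    W2 = rng.normal(0, 0.3, (10, 16))
    lr = 0.05
    for epoch in range(500):
        for i in rng.permutation(len(X)):
            h = np.tanh(X[i] @ W1)
            o = h @ W2
            err = Y[i] - o
            W2 += lr * np.outer(h, err)
            W1 += lr * np.outer(X[i], (err @ W2.T) * (1 - h ** 2))

    H = np.tanh(X @ W1)                            # hidden representation of each visited location
    sim = H @ H.T / (np.linalg.norm(H, axis=1)[:, None] * np.linalg.norm(H, axis=1)[None, :])
    space = -np.linalg.norm(coords[tour][:, None] - coords[tour][None, :], axis=2)   # spatial proximity
    time_ = -np.abs(np.arange(len(tour))[:, None] - np.arange(len(tour))[None, :])   # temporal contiguity
    iu = np.triu_indices(len(tour), 1)
    print(np.corrcoef(sim[iu], space[iu])[0, 1], np.corrcoef(sim[iu], time_[iu])[0, 1])

Rerunning the final comparison at several points during training is one way to probe the predicted shift from spatial to temporal dominance.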

2000-10-10

"Plausibility and Grammatical Agreement"

Robert Thornton

+ more

One of the central divisions in research on language processing is between theories of comprehension and production. These fields have developed largely independently, with little theoretical overlap even when dealing with the same phenomena. Four experiments were conducted to examine production/comprehension overlap by investigating the role of a probabilistic semantic factor, the plausibility of subject-verb relationships, on subject-verb agreement in English. In the production task, a verb was presented visually, followed by the auditory presentation of a sentence preamble. Participants were asked to create a complete passive sentence beginning with the preamble followed by the verb and whatever ending came to mind. The preamble contained two nouns (e.g., "the report about the senators"). The plausibility of the verb was manipulated so that either (a) both nouns could be plausible subjects (e.g., "was seen", as both reports and senators can plausibly be seen) or (b) only the subject noun could be a plausible subject (e.g., "was photocopied", as only reports can plausibly be photocopied). The comprehension task was a self-paced reading task using the same materials. The results from both methodologies demonstrated robust effects of plausibility. For production, participants made significantly more agreement errors when both nouns were plausible than when only the subject was plausible. For comprehension, participants spent significantly more time reading the verb when both nouns were plausible than when only the subject was plausible. These results will be discussed in terms of the overlap between methodologies, as well as their implications for current production models. A distributional account will be proposed that is motivated by current models of comprehension and is consistent with other recent production data.

2000-10-03

"Dorsal And Ventral Pathways In Speech And Language Processing"

Gregory Hickok

+ more

The functional neuroanatomy of speech perception has been difficult to characterize. Part of the difficulty, we suggest, stems from the fact that the neural systems supporting "speech perception" vary as a function of task. Specifically, the set of cognitive and neural systems involved in performing traditional laboratory speech perception tasks, such as discrimination or identification, are not necessarily the same as those involved in speech perception as it occurs during natural language comprehension. Based on a review of data from a range of methodological approaches, and two new experiments, we propose that auditory cortical fields in the posterior half of the superior temporal lobe, bilaterally, constitute the primary substrate for constructing sound-based representations of speech, and that these sound-based representations interface with different supramodal systems in a task-dependent manner. Tasks which require access to the mental lexicon (i.e., accessing meaning-based representations) rely on a ventral pathway in which auditory-speech representations are mapped onto meaning; tasks which require explicit access to speech segments rely on a dorsal pathway which interfaces auditory- and articulatory-based representations of speech. We propose that the dorsal, auditory-motor interface system is critical for speech development and also subserves phonological working memory in the adult. We'll also discuss how this model can account for clinical aphasic syndromes.

2000-06-06

"Do Children Have Specialized Word Learning Abilities?"

Gedeon Deák

+ more

Evidence that young children learn words at a prodigious rate has led developmental researchers to postulate domain-specific word learning processes. I will give a broad (but informal) overview of these proposals. I will then review evidence for and against the uniqueness of word learning qua induction. The evidence (much of it very recent) implies that general inductive processes can account for the most widely cited findings. Other evidence shows that preschoolers are not precocious in all regards, and *most* of their word learning difficulties are predictable from general conceptual and inductive factors. Preschoolers are, however, sensitive to the unique semantic and distributional properties of natural lexicons, raising interesting (if unresolvable) evolutionary questions.

2000-05-16

Grammatical Gender Modulates Semantic Integration Of A Picture In A Spanish Sentence

Nicole Wicha

+ more

While grammatical gender is widespread across the world's languages, its role in processing is poorly understood. Wicha, Bates, Orozco-Figueroa, Reyes, Hernandez and Gavaldón (in preparation) found that gender interacts with semantic information during on-line sentence processing, to facilitate or inhibit picture-naming times in Spanish. The current study uses event-related potentials (ERPs) to further examine the nature and time course of the effect of gender in sentence processing. Native Spanish speakers listened for comprehension to Spanish sentences, wherein one of the nouns was replaced by a line drawing. The object depicted by the drawing was either semantically congruent or incongruent within the sentence context. Additionally, the object's name either agreed or disagreed in gender with that of the preceding determiner (e.g., el, la). Semantically incongruent drawings elicited a classic N400, regardless of gender agreement. ERP amplitude in the N400 region, however, was sensitive to the gender of the determiner, being smaller for mismatches than matches, especially over (pre)frontal sites. There was also an effect of gender expectation on the ERP to the article, with unexpected determiners eliciting a larger (pre)frontal negativity than expected determiners. In sum, gender and semantic information both influenced a picture's integration with a sentence's meaning, primarily over frontal regions, albeit in different ways. Listeners thus do use gender information even from articles to comprehend sentences.

Presented at the Annual Cognitive Neuroscience Society Meeting, San Francisco, CA, on April 9-11, 2000.

2000-05-09

Reasons, Persons and Cyborgs

Andy Clark
(guest lecture; in CSB 003)

+ more

The scientific image of the nature of human reason is in a state of flux. Insights from Cognitive Psychology, Artificial Neural Networks, Neuroscience, Cognitive Anthropology and Robotics are converging on a model of human reason in which reliable environmental context, inorganic props and tools, emotional responses and (other) so-called 'fast and frugal' heuristics all play pivotal roles in the mediation of effective adaptive response. Moving in the space of reasons, it increasingly seems, is as much about moving in the space of objects as in the space of ideas. Embodied action is part and parcel of the mechanism of reason itself. The cognitive architecture that makes us what we are involves heterogeneous, shifting webs of structure and process which criss-cross the (cognitively marginal) boundaries of the squishy biological organism.

2000-05-02

"In search of ... the lexicon"

Seana Coulson & Kara Federmeier

+ more

We review results from a series of studies that examine electrophysiological measures of lexical processing in various sorts of linguistic contexts. These findings suggest serious inadequacies in psycholinguists' conception of the lexicon.

2000-04-25

"Cerebral Organization For Word Processing In Bilingual Toddlers"

Barbara Conboy

+ more

Throughout the history of research in bilingualism, a prevailing theme has been the question of whether two languages within the same individual are mediated by the same or different neural systems. Within-subject differences in organization of the neural systems mediating each language have been thought to be influenced by experience with each language (i.e., relative language proficiency) and/or the age of acquisition of the second language (L2). Recent fMRI, PET and ERP studies with highly-proficient bilingual adults have indicated that the organization of neural systems involved in the lexical-semantic processing of each language is linked to subjects' language proficiency and frequency of use of each language rather than the age at which the L2 was acquired. The present study explored the effects of language experience on how children raised in bilingual environments process words in each of their languages. Event-related potentials (ERPs) to known and unknown words in each language (English and Spanish) were recorded in a group of 20-22 month-old children who had regular exposure to both languages. Within-language comparisons examined the neural activity elicited by each word type over eight electrode sites in each language. Between-language comparisons examined ERP differences to known versus unknown words in the dominant and non-dominant languages. Results indicated ERP patterns that were linked to language experience. Early, focally-distributed differences to known versus unknown words were found for the dominant but not the non-dominant language. Later differences were found in both languages; however, they were more focally distributed for the dominant than for the non-dominant language. These findings underscore the role of language experience in establishing specialization for language processing.

1999-11-16

Instantiating Hierarchical Semantic Relationships in a Connectionist Model of Semantic Memory

George S. Cree & Ken McRae University of Western Ontario

+ more

Past models of semantic memory have transparently represented hierarchical relationships as distinct levels of nodes connected by "isa" links. We present a connectionist model in which basic-level (e.g., dog) and superordinate-level (e.g., animal) concepts are represented over the same set of semantic features. Semantic feature production norms were used to derive basic-level representations and category memberships for 181 concepts. The model was trained to compute distributed patterns of semantic features from word forms. Whereas a basic-level word form mapped to a semantic representation in a one-to-one fashion, a superordinate word form was trained by pairing it with each of its exemplars' semantic representations with equal frequency (typicality was not built in). This training scheme mimics the fact that people sometimes refer to an exemplar with its basic-level label, and sometimes with its superordinate label. The model is used to simulate human data from typicality, category verification, and superordinate-exemplar priming experiments.
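The training scheme can be illustrated with a toy example; the features, concepts, and network sizes below are hypothetical stand-ins for the production norms used in the actual model. Basic-level word forms map one-to-one to feature vectors, while the superordinate word form is paired, with equal frequency, with each of its exemplars' vectors.

    # Sketch: basic-level and superordinate concepts over one set of semantic features.
    import numpy as np

    rng = np.random.default_rng(2)
    features = ["has_fur", "barks", "meows", "has_wings", "flies", "is_alive"]
    concepts = {
        "dog":  [1, 1, 0, 0, 0, 1],
        "cat":  [1, 0, 1, 0, 0, 1],
        "bird": [0, 0, 0, 1, 1, 1],
    }
    words = list(concepts) + ["animal"]          # basic-level labels plus one superordinate
    word_vecs = {w: np.eye(len(words))[i] for i, w in enumerate(words)}  # localist word-form input

    # training set: one-to-one for basic level, one-to-many for the superordinate
    pairs = [(word_vecs[w], np.array(f, float)) for w, f in concepts.items()]
    pairs += [(word_vecs["animal"], np.array(f, float)) for f in concepts.values()]

    W1 = rng.normal(0, 0.3, (len(words), 8))
    W2 = rng.normal(0, 0.3, (8, len(features)))
    lr = 0.2
    for step in range(3000):
        x, y = pairs[rng.integers(len(pairs))]   # equal-frequency sampling of training pairs
        h = np.tanh(x @ W1)
        o = 1 / (1 + np.exp(-(h @ W2)))
        err = y - o
        W2 += lr * np.outer(h, err * o * (1 - o))
        W1 += lr * np.outer(x, (err * o * (1 - o)) @ W2.T * (1 - h ** 2))

    # the superordinate's output blends its exemplars; only shared features
    # (here "is_alive") approach 1
    h = np.tanh(word_vecs["animal"] @ W1)
    print(dict(zip(features, np.round(1 / (1 + np.exp(-(h @ W2))), 2))))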