2018 Journée scientifique Scientific Day - CRLEC-UQAM
Journée scientifique
Scientific Day

                          2018
                     Recueil de résumés
                      Abstracts booklet

Université du Québec à Montréal
pavillon Sherbrooke

2018/05/14

Organisé par
Organized by

                                                       Conférence d’honneur
                                                       Distinguished Lecture

                                                         11:00 – 12:00
                                                     Amphithéâtre (SH-2800)

             Early Language Acquisition: Infant Brain Measures Advance Theory. Patricia Kuhl, Co-
             Director, Institute for Learning & Brain Sciences, University of Washington

             Neuroimaging the infant brain is advancing our understanding of the human capacity for language
             and helping to elucidate ‘critical periods’ for language learning early in development. I will describe
             studies and data that test specific hypotheses about the nature of the mechanisms underlying human
             language learning. One hypothesis under test is that early in development, infant learning is ‘gated’
             by the social brain. Our work is leading to the identification of biomarkers that may allow early
             diagnosis of risk for language impairments such as autism and dyslexia. Language is a model system
             for understanding humans’ most advanced capabilities.

                                                                Lunch

                                                           12:00 – 13:00
                                                    Salle polyvalente (SH-4800)

                                                         Séance d’affichage
                                                           Poster session

                                                           13:00 – 16:00
                                                    Salle polyvalente (SH-4800)

01. Norepinephrine enhances vocal learning by promoting the sensory encoding of communicative signals. Yining Chen (McGill University), Jon T. Sakata (McGill University)

Learning the acoustic structure of communicative sounds forms the basis of vocal learning. Indeed, it is important to memorize the sound of a signal before learning the motor commands to produce the sound. As such, it is critical to understand the processes underlying auditory learning to reveal mechanisms of vocal learning. Catecholamines such as norepinephrine (NE) have been implicated in auditory learning for vocal learning. We investigated the extent to which NE modulated the auditory learning of communicative signals (‘songs’) in the zebra finch, a songbird that learns its song in a manner that parallels how humans acquire speech. We have previously demonstrated that NE-synthesizing neurons are more active under conditions that promote the sensory learning of song. To determine whether NE can enhance auditory learning for vocal learning, we infused NE into an auditory processing area implicated in sensory learning as juvenile songbirds heard song for the first time in their lives. Relative to control birds, birds that were given NE during song tutoring produced songs as adults that were significantly more similar to the songs they were tutored with. Furthermore, birds given NE during tutoring (but not control birds) showed evidence of song learning within days of tutoring. These findings indicate that NE significantly and rapidly promotes the sensory learning of song in the service of vocal learning and suggest the


possibility that NE in auditory processing areas could contribute to speech learning in humans.

02. Machine learning approaches uncover the acoustic bases of vocal learning biases. Logan James (McGill University), Ronald Davies (McGill University), Jon Sakata (McGill University)

Vocal learning (e.g., language acquisition) is sculpted by biological predispositions. However, revealing the nature of these predispositions requires controlled experiments and computational approaches. Songbirds like the zebra finch offer a powerful opportunity to reveal the nature of vocal learning biases. By tutoring naïve zebra finches with random sequences of species-typical vocal elements, we have previously documented biases in how zebra finches sequence their acoustic elements (i.e., positional biases). For instance, the last element in the song tended to be longer than other elements in the song (similar to phrase-final lengthening in human speech), and elements in the middle of song tended to be higher in pitch (similar to harmonic arch structure for musical phrases). To further reveal the nature of learning biases, we employed machine learning algorithms to classify acoustic elements based on their position within the song (i.e., beginning, middle, end) and to determine which acoustic features (duration, pitch, amplitude, and various measures of entropy) provided the most predictive information about positional variation. Using random forest algorithms, we found that duration provided the most predictive information about position in song. Pitch and amplitude, followed by various measures of entropy, were secondary to duration and provided additional predictive information about position. A clustering algorithm also revealed two distinct groups with different weights across features. These data reveal the potential for machine learning algorithms to uncover the structure of learned vocal signals, and we are extending these machine learning analyses to investigate “rules” underlying vocal sequencing across vocal learning species.

03. “Hi baby”, “Allô Bébé”: Does infants’ ability to associate a speaker with a language require visual cues? Maria Tamayo (Concordia University), Esther Schott (Concordia University), Krista Byers-Heinlein (Concordia University)

Infants can distinguish between male and female voices (Johnson et al., 2011) and between different languages (Byers-Heinlein et al., 2010), but can they combine these two skills to associate a person with the language they speak? One of our previous studies suggests that infants are not able to associate a speaker with a language when only hearing the speaker’s voice (Schott & Byers-Heinlein, 2017). The current follow-up study investigates whether visual cues are required for infants to make an association between the speaker and the language they speak. We use a looking-time eye-tracking study to test monolingual and bilingual infants aged 12 and 18 months. This study uses the same familiarization-test paradigm as our previous study, but includes visual stimuli (speakers’ faces) in addition to auditory stimuli (speakers’ voices). During familiarization, participants watch video clips of speakers telling a story in either English or French. At test, the speakers either continue using the same language (same trials) or switch to the other language (switch trials). We predict that visual cues will facilitate infants’ ability to associate a speaker with a language. Thus, participants should notice the language change and look longer to switch trials than to same trials when hearing and seeing the speaker. Data collection for this study is ongoing. These results will be relevant for bilingual caregivers adopting a one-person-one-language approach, as speaking to infants exclusively in one language would only be particularly beneficial if infants are able to associate each caregiver with a specific language.

04. Daddy speaks English, Maman parle français: Do bilingual and monolingual infants form associations between a person and the language they speak? Esther Schott (Concordia University), Krista Byers-Heinlein (Concordia University)

Do infants form associations between a person and the language they speak? Advocates of the one-person-one-language approach argue that this skill could help bilingual infants acquire their languages (Doepke, Australian Review of Applied Linguistics, 1998). We used a looking-time paradigm to test whether 5-month-old (N = 38) and 12-month-old (N = 50) infants notice when a speaker switches the language they speak. Monolingual (English or French) and bilingual (English-French) infants were familiarized to a woman speaking English and a man speaking French (or vice-versa) while seeing a constant visual stimulus (flowers). At test, the voices either continued speaking the same language (same trials) or switched to the other language (switch trials). If infants notice the switch, they are expected to look longer during the switch trials than same trials. We found no evidence that 5-month-olds (p = .86) or 12-month-olds (p = .54) noticed the switch. Moreover, there was no


interaction with infants’ language background (monolingual vs. bilingual; ps > .05). Thus, we found no evidence that infants are keeping track of which person speaks which language. Ongoing research is investigating whether additional cues such as the presence of a speaker’s face can help infants to form associations between a person and the language they speak.

05. Do Mothers Listen to Form or Function in Their Infant's Speech? Chu Han Wang (McGill University), Pegah Athari (McGill University), Susan Rvachew (McGill University)

A social shaping mechanism has been hypothesized to explain early vocal development. Specifically, contingent maternal responses to infant utterances increase the ratio of speech-like to nonspeech-like vocalizations produced by infants in controlled laboratory experiments (Bloom 1987, 1988; Goldstein, King & West, 2003; Goldstein & Schwade, 2008). However, it is not clear that social shaping explains infant vocal behaviour in natural mother-infant interactions. In a longitudinal study we investigated the relationship between the form and function of infant vocalizations and contingent maternal responses to vocalizations produced by four infants (Infant 1, 6–10 mos; Infant 2, 6–8 mos; Infant 3, 11–14 mos; Infant 4, 11–14 mos). It was hypothesized that mothers would be more likely to provide a contingent (as opposed to noncontingent) vocal response to their infant’s vocalization if it was more speech-like, confirming the role of social shaping as a mechanism to increase speech-like infant vocalizations with age. Contrary to these expectations, mothers did not reliably respond contingently to speech-like vocal forms. Instead, they focused on forms that they perceived as meaningful (i.e., canonical babbles or other vocals): Infant 1, χ²(1) = 51.102, p < 0.05; Infant 2, χ²(1) = 15.789, p < 0.05; Infant 3, χ²(1) = 110.418, p < 0.05; Infant 4, χ²(1) = 223.401, p < 0.05. We conclude that the interactions appear less like social shaping and more like “mutual creativity and a cocreation of novelty” as described by Hsu and Fogel (2001).

06. Linking Intrinsic Anatomical Connectivity and Second-Language Learning Success Using Improved Tractography Reconstruction of Language Pathways. Kaija Sander (McGill University), Elise B. Barbeau (McGill University), Shari Baum (McGill University), Michael Petrides (McGill University), Denise Klein (McGill University)

Learning a first language occurs easily and naturally at a young age. However, learning a second language (L2) at a later age is more difficult, and there is large variability in L2 acquisition, with individuals learning with more or less difficulty depending on the age and manner of acquisition. Variability in L2 acquisition is also influenced by between-individual differences in language learning abilities, though little is known about the factors that lead to those differences. The main goal of this project is to identify individual neural biomarkers or predictors associated with language learning success. Specifically, we are interested in the white matter tracts connecting the language regions of the brain, to determine how brain plasticity relates to variability in language learning. Monolingual English speakers took an intensive 12-week French course. Before and after the course, we obtained anatomical T1 and diffusion MRI data, and behavioural data measuring language proficiency for each subject. We reconstructed language tracts of the brain using improved tractography protocols to help us identify differences between individuals in anatomical brain connectivity. This project has several outcomes that could be relevant to our understanding of L1 or L2 acquisition, and even certain disorders involving language. Linking intrinsic anatomical patterns and the ability to acquire an L2 could inform us about neural biomarkers predicting L2 learning success or difficulties. This could eventually be translated into teaching methods for children with learning disabilities or developmental disorders affecting language, and for stroke patients who have language impairments.

07. Bilingualism and Irony Processing: Is There A Relationship? Fiona Deodato (McGill University), Mehrgol Tiv (McGill University), Debra Titone (McGill University)

Irony is a common linguistic device that indirectly communicates one’s attitude through humor, muted affect, or insincerity. Despite its ubiquity, little is known about how bilinguals, who may uniquely exercise cognitive capacities similar to those that support the processing of indirect language, comprehend and process irony. Prior work has suggested that bilingual experience relates to core capacities of irony processing, such as potential advantages in executive functions and increased social and linguistic awareness due to constant cross-language activation and inhibition. Thus, we were interested in investigating the extent to which bilingual second language (L2) experience, specifically global L2 proficiency and age of acquisition (AoA), and working memory (WM) modulate irony comprehension when reading in the first language (L1). A sample of 48 English-L1 bilinguals read


positive and negative scenarios and were required to make sensibility judgments on subsequent literal, ironic, or anomalous statements as we recorded their reaction times. Results suggest that 1) bilinguals responded more slowly to ironic vs. literal statements, but faster to ironic vs. anomalous statements; 2) bilinguals found an ironic statement following a negative scenario easier to process than one following a positive scenario; and 3) bilinguals recognized irony faster and more easily as their global L2 proficiency increased. These results are consistent with the idea that irony comprehension is facilitated by cognitive capacities that are promoted by bilingual experience.

08. Greater Bilingual Language Experience Predicts Greater Sarcasm Use in Daily Life. Mehrgol Tiv (McGill University), Vincent Rouillard (Massachusetts Institute of Technology), Naomi Vingron (McGill University), Sabrina Wiebe (McGill University), Debra Titone (McGill University)

We investigated whether bilingual language experience impacts indirect language use in daily life, particularly with respect to sarcasm. Prior work suggests that language experience promotes metalinguistic awareness and executive control, which likely support our ability to engage in indirect communication. Thus, in this study, we predicted more frequent use of sarcasm among bilinguals vs. monolinguals, especially among more fluent bilinguals. To test these predictions, 116 adults (25 self-reported monolinguals and 91 self-reported bilinguals) completed the Sarcasm Self-Report (Ivanko, Pexman, & Olineck, 2004) and Conversational Indirectness Scales (Holtgraves, 1997), along with a language experience questionnaire. There were three key results: First, using principal component analysis, we found components relating to General Sarcasm, Embarrassment Diffusion, and Frustration Diffusion, partially replicating Ivanko et al., who studied individuals presumed to be monolingual. Second, using these component scores as dependent variables in multiple regression analyses for bilingual adults, we found that components related to increased L2 proficiency (but not L2 age of acquisition) predicted greater general sarcasm use (i.e., more proficient L2 users used sarcastic language more frequently). Finally, we observed overall greater use of sarcasm (based on the same component scores) in bilinguals as a group compared to monolinguals as a group, where monolinguals reported sarcasm tendencies comparable to those of low-medium proficiency bilinguals. Taken together, these results indicate that greater bilingual language experience is related to greater use of sarcasm in daily life. We are now investigating whether and how the relationship between bilingual language experience and sarcasm use manifests during on-line language processing.

09. The Impact of Sentence Context and Visual Speech Cues on Speech Perception under Noisy Conditions in Bilinguals. Alexandre Chauvin (Concordia University), Jean-Louis René (Concordia University), Natalie Phillips (Concordia University)

Listening environments are rarely pristine; background noise is omnipresent. Nevertheless, most people perceive speech well, suggesting the existence of mechanisms to support speech processing in noise. For example, visual cues (e.g., lip movements) improve speech perception under poor listening conditions. Likewise, semantic and lexical information (i.e., contextual cues) help disambiguate speech. These effects are well documented in native listeners, but little is known about them in non-native listeners. In this ongoing study, we are investigating the extent to which bilinguals benefit from visual speech cues and contextual information in their first (L1) and second language (L2). Participants were presented with sentences in noise (twelve-talker babble) and asked to repeat the terminal word of each sentence. Half of the sentences offered contextual information (e.g., “In the woods, the hiker saw a bear.”) while the rest offered little context (e.g., “She had not considered the bear.”). The sentences were presented in three modalities: visual, auditory, and audio-visual. Preliminary results with younger adults (mean age = 25.2 years; mean education = 15.4 years; mean L2 fluency = 4.3/5) show greater accuracy in L1 and L2 when contextual cues and/or visual speech cues are available, suggesting that these sources of information can be used in L2. We will present current data on young adults and discuss the implications for aging as we prepare to test bilingual older adults. We predict that older adults may benefit from both context and visual speech cues to a greater extent than younger adults, as this may help compensate for sensory decline.

10. Individual differences in language mixing modulate cross-language syntactic activation during natural reading. Naomi Vingron (McGill University), Jason Gullifer (McGill University), Veronica Whitford (University of Texas at El Paso)

An open question is whether bilinguals activate non-target syntax during natural reading, and whether differences in L2 experience modulate this activation. English exclusively places adjectives before nouns (the red truck), whereas French typically places adjectives after nouns (le camion rouge). Here, we


monitored eye movements of 27 French-English bilinguals (French = L1) as they read intact English sentences or English sentences containing violations consistent/inconsistent with French (He saw the truck red parked on the street. vs. He saw red the truck parked on the street.). Gaze durations (GDs) on the constructions themselves (the red truck) were similar for French-consistent violations and intact sentences, though bilinguals who rarely mix languages read French-inconsistent violations more slowly than the others. GDs on the post-target region (on the street) were slower for French-consistent violations for bilinguals who rarely mix languages. Interestingly, total reading times (TRTs) were slower for all bilinguals reading French-consistent violations vs. intact sentences, suggesting either downstream integrative effort or heightened salience for cross-language violations. These results suggest that bilinguals indeed access non-target language syntax during natural reading, though the way in which this affects sentence processing differs based on individual differences in language background.

11. Bilinguals Show Limited Eye Movement Evidence of Cross-Language Syntactic Activation for Direct-Object Constructions. Naomi Vingron (McGill University), Ying Ying Liu (McGill University), Pauline Palma (McGill University), Veronica Whitford (University of Texas at El Paso), Deanna Friesen (Western University), Debra Jared (Western University), Debra Titone (McGill University)

Important questions for bilingual reading are whether target language syntax is exclusively activated when reading in a particular language, whether syntax from another known language is co-activated, and how prior language experience modulates this. Here, we investigate these questions for direct-object constructions using eye tracking. In English, verbs always precede direct objects (I love her), while in French this precedence only holds when the direct object is a full noun (J’aime Lucy vs. Je l’aime). French-English bilinguals (L1 French: N = 27; L1 English: N = 57) read English sentences containing intact (She drives it), French-consistent (She it drives), or French-inconsistent direct-object constructions (Drives she it). Eye movement measures revealed a bilingual reading cost for sentences containing direct-object constructions violated in a manner either consistent or inconsistent with French in both L1 and L2 readers. However, individual differences in language mixing modulated reading times only for first-language reading (English-native) but not for second-language reading (French-native). Taken together, there was only limited evidence for robust cross-language syntactic activation for direct-object constructions when French-English bilinguals read in their second language (English), unlike our prior findings for the same participants examining adjective-noun constructions that differ across English and French (e.g., the red car, the car red). Thus, cross-language activation during bilingual reading may depend on the particular type of syntactic construction that conflicts across languages. Specifically, constructions that cross syntactic boundaries (e.g., direct-object constructions) may be more amenable to cross-language activation than constructions that do not (e.g., adjective-noun constructions), which has implications for second language instruction.

12. Training metamorphological skills through language awareness (éveil aux langues): design and evaluation of a teaching tool. Alexandra H. Michaud (Université du Québec à Montréal)

Many researchers recommend that teachers of French as the language of instruction take measures adapted to the linguistic heterogeneity of their classrooms (Armand, Beck et Maraillet, 2004; De Pietro, 1999; Candelier, 2003), rather than basing their teaching on the assumption that every student has the language of schooling as their first language (Dabène, 1992; De Pietro, 1999; Perregaux, 1993). To this end, language awareness (éveil aux langues, EAL) programs have been developed to awaken students to the linguistic diversity of the world and to legitimize the languages of students whose L1 is not the language of instruction (Candelier, 2003; Armand, Beck et Maraillet, 2004). One of the objectives of EAL is metalinguistic. Among the metalinguistic skills that can be trained with an EAL program, metamorphological skills are of particular interest for the development of literacy competencies (Carlisle, 1995; Carlisle, 2010; Fejzo, Godard et Laplante, 2014; Fejzo, 2016). The objective of our study is the design and evaluation of a language-awareness teaching tool for training the metamorphological skills of students in the second cycle of elementary school. This development research follows a methodology inspired by Van der Maren (1996), comprising 1) defining the research problem, 2) formulating the objectives, 3) creating the prototype, 4) evaluating the prototype, and 5) refining it and producing the final version.


13. Does lip reading make speech comprehension easier for everyone? Electrophysiological evidence on interindividual differences in younger and older adults with varying working memory capacity. Nathalie Giroud (Concordia University), Max Herbert (Concordia University), Natalie Phillips (Concordia University)

For older adults, speech comprehension places a strong load on working memory because of the need to rapidly process auditory cues while maintaining relevant information. The presence of visual cues (i.e., lip movements) during speech processing has been shown to enhance speech comprehension in older adults. However, the relation between audiovisual speech processing and working memory remains unclear. In this EEG study, we investigated the extent to which interindividual differences in audiovisual speech processing may be explained by working memory capacity (WMC) in younger (N=32, M=23.66y) and older adults (N=16, M=73y). The N400 was recorded time-locked to the last word of high and low predictable sentences. The semantic context (SC) effect was quantified as the difference in N400 magnitude evoked by low minus high predictable sentences and was assessed separately for an auditory-only (AO) and an audiovisual (AV) condition. All individuals, regardless of age or WMC, showed an earlier SC effect (approx. 80 ms) in the AV condition compared to the AO condition, highlighting the faster integration of unpredictable words in sentences when visual cues are available. Additionally, we found a stronger SC effect in the AV condition compared to the AO condition, but only in individuals with high WMC. Thus, our data suggest that only a high WMC allows listeners to use visual speech cues to form strong predictions about the end of a sentence. We therefore argue that individual differences in WMC may relate to qualitatively different aspects of the audiovisual benefit during speech comprehension in younger and older adults.

14. Auditory discourse processing in bilinguals: An ERP analysis. Maude Brisson-Mckenna (Concordia University), Angela Grant (Concordia University), Natalie Phillips (Concordia University)

When communicating in a second language (L2), one may feel competent at producing and comprehending single words or phrases, but producing and comprehending extended discourse is often more challenging. Yet most psycholinguistic studies have focused on the role of bilingualism in word- and sentence-level processing. The goal of our current experiment is therefore to contrast two different frameworks of discourse processing in L2. The Capacity Theory of language comprehension (Just & Carpenter, 1992) predicts that bilinguals are only able to benefit from discourse-level cues in L2, such as semantic and world-knowledge information, if they have sufficient working memory capacity to do so. In contrast, the Noisy Channel Model (Gibson, Bergen, & Piantadosi, 2013) predicts that bilinguals will rely mainly on discourse-level cues in L2 because they are less familiar with the syntax. In the current study, English-French bilinguals listened to short stories in English and in French while their brain activity was recorded using electroencephalography. We examined whether bilinguals were more sensitive to discourse-level or word-level cues in their L2 compared to their first language (L1) by looking at the amplitude and latency of the N400 component. Stimulus-locked analyses revealed greater negativities in L2 compared to L1 (p = .012). We interpret this as reflecting the overall greater processing load in L2. We also found a significant N400 effect for the discourse-level cues (p = .031) and no significant effects for the word-level cues, thus lending support to the Noisy Channel Model.

15. The Role of Semantic Context in Bilingual Speech Perception in Noise: An ERP Investigation. Kristina Coulter (Concordia University), Annie Gilbert (McGill University), Shanna Kousaie (McGill University), Debra Titone (McGill University), Denise Klein (McGill University), Vincent Gracco (McGill University), Natalie Phillips (Concordia University)

Although speech perception often occurs in suboptimal listening conditions, semantic context can help individuals perceive speech under noisy listening conditions in their native language (L1). However, the extent to which bilinguals can use semantic context while perceiving speech in noise in their second language (L2) is unclear. We therefore used event-related brain potentials (ERPs) and an adapted version of the Revised Speech Perception in Noise Task to examine this in 15 English-French sequential bilinguals, 19 French-English sequential bilinguals, and 16 simultaneous bilinguals. Participants listened to English and French sentences varying in sentential constraint, leading to a predictable or unpredictable final word (e.g., high: “The lion gave an angry roar.”; low: “He is thinking about the roar.”), and repeated the final word of each sentence. Sentences were presented in quiet and in noise (i.e., a 16-talker babble mask). Larger N400 amplitudes were observed in quiet compared to noise and in L2 compared to L1. A larger and later N400 was


observed for low context sentences compared to high context sentences across all groups and in both L1 and L2, suggesting more effortful processing of low compared to high constraint sentences. High constraint sentences elicited larger N400 amplitudes in L2 compared to L1, while low constraint sentences elicited similar N400 amplitudes in L1 and L2. These findings indicate that, although bilinguals benefit from semantic context during speech perception in both their languages, they appear to benefit from semantic context to a lesser extent in their L2 than in their L1.

16. Self-reported effect of background music in older adults. Amélie Cloutier (Université de Montréal), Nathalie Gosselin (Université de Montréal), Élise Cournoyer Lemaire (Université de Sherbrooke)

Background music accompanies our daily activities at all ages, whether they are of a cognitive nature or not: driving, reading, cooking, and sports are good examples. Moreover, a qualitative study reports that many older people testify that music has a positive impact not only on their cognition but also on their mood. However, studies investigating the effect of background music on cognition have yielded heterogeneous results. The main objective of this study is to compare the self-reported effects of background music on cognition and emotions in the elderly, using a rating-scale questionnaire. This study also aims to explore the musical preferences and habits of this population in relation to their daily activities. To achieve these objectives, 20 people aged over 60 completed the questionnaire. Preliminary results suggest that background music has a significantly more positive impact on emotions than on cognition. Older adults also prefer to listen to relaxing and familiar music when performing a cognitive activity, whereas they prefer stimulating and familiar music with lyrics during non-cognitive activities. In conclusion, these results appear to converge with the literature on the effect of background music on cognition in adolescents and young adults.

17. Inhibitory and Lexical Frequency Effects in Younger and Older Adults’ Spoken Word Recognition. Sarah Colby (McGill University), Victoria Poulton (McGill University), Meghan Clayards (McGill University)

Older adults are known to have more difficulty recognizing words with dense phonological neighbourhoods (Sommers & Danielson, 1999), suggesting an increased role of inhibition in older adults’ spoken word recognition. Revill & Spieler (2012) found that older adults are particularly susceptible to frequency effects and will look more at high-frequency items than younger adults do. We aim to replicate and extend the findings of Revill & Spieler (2012) by investigating the role of inhibition, along with frequency, in resolving lexical competition in both older and younger adults. Older (n=16) and younger (n=18) adults completed a visual world paradigm eyetracking task, which used high- and low-frequency targets paired with competitors of the opposing frequency, and a Simon task as a measure of inhibition. We find that older adults with poorer inhibition are more distracted by competitors than are those with better inhibition and younger adults. This effect is larger for high-frequency competitors than for low-frequency ones. These results have implications for the changing role of inhibition in resolving lexical competition across the adult lifespan and support the idea that decreased inhibition in older adults contributes to increased lexical competition and stronger frequency effects in word recognition.

18. Bilingual Experience and Executive Control over the Adult Lifespan: The Role of Biological Sex. Sivaniya Subramaniapillai (McGill University), Maria Natasha Rajah (McGill University), Stamatoula Pasvanis (Douglas Institute Research Centre), Debra Titone (McGill University)

We investigated whether bilingual language experience over the lifespan impacts women and men in a manner that differentially buffers against age-related declines in executive control. To this end, we examined whether executive control performance in a lifespan sample of adult women and men was differentially impacted by individual differences in bilingual language experience, assessed using an unspeeded measure of executive control, the Wisconsin Card Sorting Test (WCST). The results suggested that women showed both the greatest degree of age-related decline across WCST measures and a greater likelihood than men of showing improved performance as a function of increased bilingual experience. We consider the implications of this finding for advancing our understanding of the relation between bilingualism and cognition, as well as the effects of biological sex on cognitive aging.


19. Relationship of pure-tone and speech-in-noise measures of hearing to scores on the Montreal Cognitive Assessment Scale (MoCA): Sex differences and comparisons to other neuropsychological test results. Faisal Al-Yawer (Concordia University), Halina Bruce (Concordia University), Karen Li (Concordia University), M. Kathleen Pichora-Fuller (University of Toronto), Natalie Phillips (Concordia University)

Research shows a relationship between hearing loss (HL) and cognitive decline in older adults. Scores on the Montreal Cognitive Assessment (MoCA), a brief screening measure, show an association with HL. Furthermore, there are sex-related differences in the prevalence of age-related HL. It is unclear how the association between HL and MoCA scores compares to the association between HL and more comprehensive neuropsychological testing, or whether these associations between cognitive tests and HL vary by sex. 170 older adults (M=68.94 years; 61.2% female) were tested. Hearing was assessed using pure-tone audiometry and the Words in Noise (WIN) test. Cognition was assessed using the MoCA, the Stroop task, and the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS). Most participants had good hearing (66.9% normal hearing (NH), defined by pure-tone average thresholds at 500, 1000, and 2000 Hz in both ears … 25/30). There was no difference in the proportions of individuals passing the MoCA based on hearing (76.5% of NH; 68.4% of HL). Nevertheless, MoCA scores were correlated with PTA (r=-0.176; p=0.027). When the sample was split by sex, the correlation was significant only for women (r=-0.220; p=0.03). This pattern remained even when hearing-dependent subtests of the MoCA were excluded (r=-0.223; p=0.028). We observed a similar sex-specific correlation between women’s WIN and Stroop scores (r=-0.365; p=0.061). PTA in all participants was associated with scores on the RBANS memory subtests (r range=0.253-0.335). The present findings highlight sex differences in a sample of healthy older adults. Implications for cognitive screening and sensory-cognitive relationships are discussed.

20. Use of noise generators for the treatment of hyperacusis: a case series. Charlotte Bigras (Université de Montréal), Claudia Côté (Institut de réadaptation en déficience physique de Québec), Stéphane Duval (Institut Raymond-Dewar, Centre de recherche interdisciplinaire en réadaptation), Donald Lafrenière (Institut Raymond-Dewar, Centre de recherche interdisciplinaire en réadaptation), Catherine-Ève Morency (Institut de réadaptation en déficience physique de Québec), Marjolaine Viau (Institut Raymond-Dewar, Centre de recherche interdisciplinaire en réadaptation), Sylvie Hébert (Université de Montréal)

Introduction: Hyperacusis, a disorder of tolerance to external sounds, is a complaint increasingly reported in clinical audiology. Sound desensitization is one of the therapies proposed to treat tinnitus, but its effectiveness has not been demonstrated for hyperacusis when the latter is the patient’s main complaint. Our objective is to document the audiological and psychological effects of noise-generator therapy in patients whose main complaint is hyperacusis. Methods: Five women whose main complaint was hyperacusis wore noise generators bilaterally for four weeks. The effect of the treatment was measured using the Cox Loudness Contour Test or discomfort-threshold measurements, stapedial reflexes, and the GÜF, HADS, and WHOQOL-BREF questionnaires before the intervention (pre-treatment), after two weeks of treatment (mid-treatment), after the intervention (post-treatment), and one month after the end of the intervention (8 weeks). Results: Discomfort thresholds … five (range: 6.25 to 18.75 decibels), and the GÜF score decreased by an average of 10.8 points across patients (range: 3 to 24 points). These results suggest that wearing noise generators reduces auditory sensitivity and the associated distress. Discussion and conclusion: The use of noise generators appears to be a feasible treatment for patients whose main complaint is hyperacusis.

21. The effect of the emotional characteristics of different sound environments on selective attention of adults. Eva Nadon (Université de Montréal), Rebekka Lagace-Cusiac (Université de Montréal), Jadziah Pilon (Université de Montréal), Nathalie Gosselin (Université de Montréal)

Our daily activities (e.g., cooking or driving) are very often performed in the presence of sound, which could influence selective attention abilities. These abilities make it possible to select relevant information from among a large number of distractors when performing specific tasks. The emotional characteristics of different sound environments (e.g., their valence and arousal) are important to consider when trying to understand their impact on cognition. The first aim of this preliminary study is to explore


the effect of the emotional characteristics of various sound environments on selective attention. To do this, neurotypical adults performed a Stroop task including congruent trials (e.g., RED written in red) and incongruent trials (e.g., RED written in green) in different sound environments varying in valence and arousal. The response times and error rates of the participants will be compared according to the type of trial and the sound environment. We predict that the more pleasant and stimulating the sound environment is, the better the performance will be. The second aim is to examine whether participants’ listening habits and preferences while performing cognitive tasks have an impact on their performance in different sound environments. To that end, participants will answer a questionnaire investigating their habits and listening preferences when conducting cognitive tasks.

22. Perceptual characteristics of emphatic stress produced by children with autism spectrum disorder. Camille Vidou (Université du Québec à Montréal), Jade Picard-Lauzon (Université du Québec à Montréal), Vanessa Moisoni (Université du Québec à Montréal), Paméla Trudeau-Fisette (Université du Québec à Montréal), Lucile Rapin (Université du Québec à Montréal), Lucie Ménard (Université du Québec à Montréal)

Emphatic stress is produced by modulating the acoustic parameters of F0, duration, and intensity in order to highlight a linguistic unit. Studies have shown that children with autism spectrum disorder (ASD) have difficulty producing and perceiving emphatic stress. This study addresses the communicative implications of these difficulties through two objectives: 1) to establish the intelligibility of emphatic stress produced by typically developing (TD) children and by children with ASD, and 2) to describe the perceptual characteristics of emphatic stress produced by children with ASD. A first emphatic-stress identification test was conducted: fifteen neurotypical French-speaking adults assigned a prosodic condition (neutral or emphatic) to words produced by children with ASD and TD children. The … visual analogue scale. The results show that the increases in pitch, duration, and articulatory precision from the neutral to the emphatic condition are perceived as smaller in children with ASD. These results suggest that not all of the parameters usually recruited for emphatic stress are perceptually functional in children with ASD, thereby reducing intelligibility.

23. The development of a video-based coding tool to monitor response to intervention in school-age children with autism: The Autism Behavioural Change Observational Measure. Melanie Custo Blanch (McGill University), Megha Sharda (Université de Montréal), Ruveneko Ferdinand (McGill University), Melissa Tan (Westmount Music Therapy), Krista Hyde (Université de Montréal), Aparna Nadig (McGill University)

Evaluating the effectiveness of behavioural interventions is a challenge in Autism Spectrum Disorder (ASD) research. There is a lack of tools that are sensitive to treatment-related change, that have applicability across multiple settings, and that target school-aged children with ASD. The Autism Behavioural Change Observational Measure (ABCOM), a 27-item video-observation-based behaviour coding questionnaire, was developed with the intention of evaluating response to treatment in school-aged children with ASD. The ABCOM gauges autism-relevant outcomes (shared social interaction, initiation of communication, restrictive and repetitive behaviours, etc.) using a 4-point Likert scale based on the frequency of observed behaviours in a given session, and it may be used in varied naturalistic intervention settings. The ABCOM was tested with video recordings from a completed randomized controlled trial (RCT: ISRCTN26821793) of 6- to 12-year-old children with ASD who received a music or non-music intervention. Two experienced raters were trained using a standardized procedure with 3 example videos, 9 training videos, and a written manual in order to achieve acceptable levels of reliability. Both raters were blind to session ID and were not involved in the RCT. After initial training and coding of 9 videos, both raters reached an ICC=0.92, p=.001. On 10 subsequent test videos coded by both raters, an ICC=0.95 (p

… potential observation-based outcome measure for ASD interventions.

24. Comparing intensity ratings of emotions in music and faces by adolescents with Autism Spectrum Disorder. Hadas Dahary (McGill University), Shalini Sivathasan (McGill University), Eve-Marie Quintin (McGill University)

People with autism spectrum disorder (ASD) can demonstrate difficulty in processing emotions in faces. Little research has examined emotion processing using music, a domain in which they show great skill. We assessed perception of musical and facial expressions of emotions for adolescents with ASD with high and … the autism spectrum on perception of cognitive and emotional aspects of music perception. Methods: Twenty-three adolescents with ASD with low and high verbal comprehension (WISC-V: VCI) and visual-spatial (WISC-V: VSI) skills completed musical rhythm, working memory, and emotion recognition tasks adapted for this study. Results: Across emotion recognition tasks, performance accuracy was not influenced by participants’ level of intellectual functioning (VCI and VSI). However, participants in the low VCI group rated music-evoked excerpts as more intense than those in the high VCI group. Additionally, accuracy on the musical working memory and rhythm tasks was influenced by participants’ VSI scores, but not by VCI. Conclusions: The results suggest that adolescents with ASD are able to recognize music-evoked emotions and to perform musical working memory and rhythm tasks irrespective of their level of verbal cognitive skill. The findings lend support to the use of targeted, strengths-based music interventions adapted to varying cognitive skills within the spectrum.

… performance when processing and producing simple and complex rhythms, which seems associated with visual perceptual skills (DePape et al., 2012). Previous studies have focused on individuals with high cognitive functioning; thus, the relationship between rhythm perception, ASD symptomology, and cognitive skills remains to be investigated across levels of functioning. The current study aimed to assess the relationship between ASD symptomology and rhythm perception across levels of cognitive functioning. Twenty-seven adolescents (Mage = 14.57) with ASD and varying levels of cognitive functioning completed a rhythm perception task. The participants’ performance was significantly better than chance, p …

27. Recognition of music-evoked emotions among adolescents with autism spectrum disorder: Examining the effect of musical excerpt duration and relationship with cognitive skills. Tania Palma Fernandes (McGill University), Hadas Dahary (McGill University), Shalini Sivathasan (McGill University), Eve-Marie Quintin (McGill University)

Individuals with autism spectrum disorder (ASD) show impairments in recognizing emotions conveyed in facial expressions and speech, but can recognize music-evoked emotions comparably to individuals with typical development, suggesting a strength within the musical domain. Music is a dynamic stimulus that can convey different emotions over the course of minutes and even seconds. We examined the effects of musical excerpt exposure duration, cognitive skills (verbal and visual-spatial), and ASD symptomology on the performance (accuracy and reaction time) of adolescents with ASD on music-evoked emotion recognition tasks. Twenty-one adolescents with ASD identified emotions (happy, sad, or fearful) in two music-evoked emotion tasks: the long music task (LMT; mean duration of 37 seconds/excerpt) and the short music task (SMT; mean duration of 4 seconds/excerpt). Participants completed the Verbal Comprehension Index (VCI) and Visual Spatial Index (VSI) of the WISC-V, and their teachers completed the Social Responsiveness Scale (SRS-2). Results indicated that adolescents with ASD can accurately recognize music-evoked emotions irrespective of musical excerpt exposure duration. However, they were faster at identifying emotions in the SMT than in the LMT, and at identifying happy versus fearful music-evoked emotions. Further, cognitive skills impacted response times on the LMT, but not on the SMT. ASD symptomology did not impact performance on either task. These findings suggest that emotion processing and decision making among individuals with ASD who have difficulty with verbal and visual-spatial skills may be facilitated by shorter exposure to emotions, and potentially to other types of stimuli, which warrants further research.

28. Intellectual abilities and autistic traits in musical prodigies. Chanel Marion-St-Onge (Université de Montréal), Megha Sharda (Université de Montréal), Margot Charignon (Université de Montréal), Isabelle Peretz (Université de Montréal)

Musical prodigies are musicians who acquire exceptional musical performance levels before they reach adolescence. A previous study of a group of musical prodigies (n=8) showed that they have superior IQ and exceptionally good working memory (WM; Ruthsatz et al., 2014). Ruthsatz et al. also showed that 4 of the 8 musical prodigies were autistic or had close relatives who were, which suggests a link between these two phenomena. In the current study, we tested 7 musical prodigies (3 female, aged 17-32) on a standardized IQ test (WAIS-IV; Wechsler, 2008), which includes working memory tasks, and on a visual WM task (from MEM-III; Wechsler, 2001). We also measured participants’ autistic traits with the Autism Spectrum Quotient (AQ; Baron-Cohen et al., 2001). Prodigies had full-scale IQ scores in the normal range (Mean±1SD = 100±15; M = 108.5±14.0). Verbal and visual WM scores were also in the normal range (Mean±1SD = 10±3; scaled scores: M = 11.85±3.84 and M = 12.14±2.79). Participants’ autistic traits were variable (M = 18.4±7.8) but below what is considered typical of individuals with Autism Spectrum Disorder (35+). Our preliminary results indicate that an exceptional IQ or WM may not be essential for being a musical prodigy. These results also challenge the idea that musical talent may be linked to autism. Data from 5 more prodigies will be added in the poster presentation.


29. Musicians’ cognition: what if musical training influenced the way we find our way in space? Caroll-Ann Blanchette (Université de Montréal), Greg L. West (Université de Montréal), Nathalie Gosselin (Université de Montréal)

Various studies indicate that musicians have better memory than non-musicians. Consistent with this finding, increases in grey matter (GM) and in functional activity in the hippocampus are associated with musical training. The hippocampus is known to play a key role in several types of memory, including …

… in a memory experiment. During encoding, 24 participants (13 female) listened twice to 6 sentences with fearful prosody produced by different speakers (3 male) (Fear group). The other participants listened to the same stimuli, but expressed in a neutral tone (Neutral group). During recognition, immediately after encoding, subjects heard 4 sentences produced by each of the old speakers: same-emotion/same-sentence, same-emotion/different-sentence, different-emotion/same-sentence, and different-emotion/different-sentence. Twenty-four sentences (half fearful) produced by 6 novel speakers were also presented. An ANOVA revealed significant effects of emotion (F(1,42)=58.13, p …

difference in processing music between musicians and non-musicians that was not present for voice. Moreover, only musicians showed music-specific adaptation effects, with the bilateral amygdala, thalamus, hippocampus, and superior temporal gyrus (STG) contributing the most to this effect. In addition, only musicians showed a distinction in processing fearful versus neutral music, with the greatest contributions coming from the bilateral STG, thalamus, Heschl’s gyrus, and left amygdala. These findings provide strong support for a role of expertise in the processing of musical emotions. Moreover, they demonstrate the advantage of using high-resolution fMRI and of combining adaptation paradigms with multivariate analytical approaches.

32. Tapping Into Rate Flexibility: Musical Training Facilitates Synchronization Around Spontaneous Production Rates. Rebecca Scheurich (McGill University), Anna Zamm (McGill University), Caroline Palmer (McGill University) [Scheurich, R., Zamm, A., & Palmer, C. (2018). Tapping into rate flexibility: Musical training facilitates synchronization around spontaneous production rates. Frontiers in Psychology, 9, 458. doi:10.3389/fpsyg.2018.00458]

The ability to flexibly adapt one’s behavior is critical for social tasks such as speech and music performance, in which individuals must coordinate the timing of their actions with others. Natural movement frequencies, also called spontaneous rates, constrain synchronization accuracy between partners during duet music performance, whereas musical training enhances synchronization accuracy. We investigated the combined influences of these factors on the flexibility with which individuals can synchronize their actions with sequences at different rates. First, we developed a novel musical task capable of measuring spontaneous rates in both musicians and non-musicians, in which participants tapped the rhythm of a …

33. Comparing and contrasting the neural mechanisms of autobiographical memory and problem solving. Sarah Peters (McGill University), Signy Sheldon (McGill University)

This study examined whether the neural overlap between autobiographical memory and goal-directed problem solving is determined by how information is retrieved. In an MRI scanner, young adults performed two retrieval tasks: in response to autobiographical memory and problem-solving cues, they first produced multiple memory/solution exemplars (generation) and then thought about one instance in detail (elaboration). Multivariate analysis showed that neural overlap was strongest between retrieval forms. Generation was associated with anterior brain activity, evident within the hippocampus, and elaboration with posterior brain activity. These results provide insights into how retrieval processes are organized in the brain and influence higher cognitive functions.

34. From the brain to the hand: electrophysiological correlates of language-induced motor activity. Fernanda Perez-Gay Juárez (McGill University, Université du Québec à Montréal), David Labrecque (Université du Québec à Montréal), Victor Frak (Université du Québec à Montréal)

The link between language and motor systems has been the focus of increasing interest in cognitive neuroscience. Some classical papers studying event-related potentials (ERPs) induced by linguistic stimuli studied electrophysiological activity of Frontal
familiar melody while hearing the corresponding melody tones.
                                                                      and Central electrodes around 200 ms when comparing action
The novel task was validated by similar measures of spontaneous
                                                                      and non-action words, finging a bigger p200 for action words.
rates generated by piano performance and by the tapping task
                                                                      However, the degree to which these correlates are related to
from the same pianists. We then implemented the novel task with
                                                                      motor activity has not yet been directly measured. On the other
musicians and non-musicians as they synchronized tapping of a
                                                                      hand, a series of studies have validated the use of a grip force
familiar melody with a metronome at their spontaneous rates,
                                                                      sensor (GFS) to measure language-induced motor activity during
and at rates proportionally slower and faster than their
                                                                      both isolated words and sentence listening, finding that action
spontaneous rates. Musicians synchronized more flexibly across
                                                                      words induce an augmentation in the grip force around 300 ms
rates than non-musicians, indicated by greater synchronization
                                                                      after the onset of the stimulus. The purpose of the present study
accuracy. Additionally, musicians showed greater engagement
                                                                      is to combine both techniques to assess if the p200 is related to
of error correction mechanisms than non-musicians. Finally,
                                                                      the augmentation of the grip force measured by a GFS. We ran a
differences in flexibility were characterized by more recurrent
                                                                      first experiment in a group of 17 healthy subjects to verify if
(repetitive) and patterned synchronization in non-musicians,
                                                                      listening to action and non-action words while maintaining an
indicative of greater temporal rigidity. [Scheurich, R., Zamm, A.,
                                                                      active grasping task could induce the p200 differences observed
& Palmer, C. (2018). Tapping into rate flexibility: Musical
