Poster Session

Thursday, 29th of September

  1. The impact of music on learning and consolidation of novel words

    Victoria J. Williamson1 and Jakke Tamminen2

    1. Department of Music, University of Sheffield, UK

    2. Department of Psychology, Royal Holloway, University of London, UK

    Music can be a powerful mnemonic device, as shown by a body of literature demonstrating that listening to text sung to a familiar melody results in better memory for the words compared to conditions where they are spoken. Furthermore, patients with a range of memory impairments appear to be able to form new declarative memories when they are encoded in the form of lyrics in a song, while unable to remember similar materials after hearing them in the spoken modality. Whether music facilitates the acquisition of completely new information, such as new vocabulary, remains unknown. Here we report three experiments in which adult participants learned novel words in the spoken or sung modality. While we found no benefit of musical presentation on free recall or recognition memory of novel words, novel words learned in the sung modality were more strongly integrated in the mental lexicon compared to words learned in the spoken modality. This advantage for the sung words was only present when the training melody was familiar. The impact of musical presentation on learning therefore appears to extend beyond episodic memory and can be reflected in the emergence and properties of new lexical representations.

  2. Effect of vocabulary skills on visual contextual priming in 24-month-olds: ERP evidence

    Andrea Helo1,2, Najla Azaiez1, and Pia Rämä1

    1. LPP, Université Paris Descartes, France

    2. University of Chile, Santiago, Chile

    We examined whether visual contextual information affects word processing in 24-month-olds. Children were presented with visual scene primes (e.g., a kitchen) followed by a spoken object name that was either consistent (e.g., spoon) or inconsistent (e.g., bed) with the preceding scene context. Event-related potentials were recorded in response to the target words. We expected words presented in an inconsistent context to elicit a more pronounced N400-like component. Such results would suggest that children have acquired knowledge about visual semantic regularities and are capable of integrating semantic conceptual information from visual contexts with object names. Thirty-one 24-month-old children participated in the study. The results showed that words inconsistent with the scene context elicited a larger N400-like component in both normal-to-low and normal-to-high producers. However, the language groups exhibited different timing and distribution of the N400 component. In low producers, the N400 effect was found over right frontal sites, while in high producers it appeared over left frontal sites. The component also appeared earlier in high than in low producers. The results indicate that children are capable of integrating context-related information from scenes with linguistic input, but that distinct neural resources are activated in contextual scene-word priming depending on linguistic skills.

  3. The left, the better: white-matter brain integrity predicts foreign-language imitation

    Lucía Vaquero1, Antoni Rodríguez-Fornells1,2, and Susanne Reiterer3

    1. Universitat de Barcelona, Spain

    2. IDIBELL & ICREA

    3. University of Vienna

    Speech imitation is crucial for language acquisition and second-language learning. Interestingly, great individual differences in the ability to imitate foreign-language sounds have been observed. The cause of this inter-individual variability remains unknown, although it might be explained in part by structural predispositions. We correlated white-matter structural properties of the arcuate fasciculus (AF) with the performance of 52 German speakers in a Hindi sentence- and word-imitation task. First, a deterministic reconstruction was performed, permitting us to extract the mean values along the three branches of the AF. We found that a larger lateralization of AF volume towards the left hemisphere predicted the performance of our participants in the imitation task. Second, an automatic reconstruction was carried out, allowing us to localize the specific region within the AF that exhibited the largest correlation with foreign-language imitation. Results of this reconstruction also showed a left-lateralization trend: greater FA values in the anterior half of the left AF correlated with performance in the Hindi-imitation task. To the best of our knowledge, this is the first time that foreign-language imitation aptitude has been tested using a more ecological imitation task and correlated with DTI tractography, using both a manual and an automatic method.

  4. Stimulus familiarity boosts rule abstraction: insights for comparative experiments on pattern perception

    Andrea Ravignani and Piera Filippi

    Artificial Intelligence Lab, Vrije Universiteit Brussel

    Pattern perception is central to animal communication, including human language. Although much research has investigated this ability across multiple species, the effects of (i) stimulus audibility, (ii) perceptual conspicuousness, and (iii) familiarity on pattern processing in the species under test have often been neglected. These are key methodological aspects to address in comparative experiments on pattern perception across animal species that diverge widely in bio-cognitive apparatus and ecological habitat. Here we find that sensory familiarity with stimuli affects the degree of cognitive abstraction in pattern-learning experiments. When test stimuli are familiar, humans perform above chance in both lower-abstraction tests (generalization of an ABnA rule over different elements within the A and B categories) and higher-abstraction tests (generalization of the ABnA rule over the A and B categories). However, when the same structural rule is instantiated over unfamiliar, though clearly perceivable, sounds, humans fail the higher-abstraction test while still succeeding in the lower-abstraction test. These findings are crucial for improving comparative research on category, syntax, phonology and concept learning, as well as on analogical reasoning, across animal species.

  5. The effect of word position and prosody in a word learning task: a study on school-age children

    Piera Filippi1,2 and Sabine Laaha2

    1. Vrije Universiteit Brussel

    2. University of Vienna

    In this study, we investigated how word position and pitch enhancement facilitate word learning in school-age children. Eight- to nine-year-old participants (n = 56) viewed photographs belonging to one of three semantic categories while hearing an utterance containing a target word. In the control condition, all words had the same pitch and, across trials, the position of the target word was varied systematically within each utterance. The only cue to word-meaning mapping was the co-occurrence of target words and referents; this cue was present in all conditions. In condition 2, the position of the target word was varied systematically within each utterance across trials, and the word was sounded at a pitch interval typical of infant-directed speech (IDS). In condition 3, the target word always occurred at the end of the utterance and was sounded at the same fundamental frequency as all the other words of the utterance. In condition 4, the target word always occurred at the end of the utterance and was sounded at a pitch interval typical of IDS. We found that learning performance was higher than that observed with simple co-occurrence only in condition 4. We conclude that, for school-age children, the combination of recency effects and pitch enhancement facilitates word learning.

  6. Evaluating the influence of language on the vertical representation of auditory pitch and loudness

    Irune Fernandez-Prieto1,2, Charles Spence2, Ferran Pons3, and Jordi Navarra1

    1. Fundació Sant Joan de Déu and Parc Sanitari Sant Joan de Déu, Barcelona

    2. Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford

    3. Department of Cognition, Development and Educational Psychology, University of Barcelona

    Sounds that are high in pitch and loud in intensity are associated with upper spatial positions; the opposite appears to be true for low and quiet sounds and lower positions in space. In English, the words "high" and "low" describe pitch, loudness and spatial elevation. In contrast, in Spanish and Catalan, the words "agudo/agut" and "grave/greu" are used to describe high and low pitch, respectively, while the words "alto/alt" and "bajo/baix" are principally associated with loudness and spatial elevation. In order to understand the influence that language might have on crossmodal associations, we conducted a study involving native speakers of English and of Spanish/Catalan. The participants' task consisted of judging whether a tone was higher or lower (Experiment 1), or more or less intense (Experiment 2), than a reference tone by pressing one of two buttons physically located in an upper or lower position in space. While all participants showed clear congruency effects between pitch or loudness and spatial elevation, English speakers showed significantly more robust congruency effects between pitch and spatial elevation (e.g., a higher pitch and the top button) than Spanish/Catalan speakers. These results suggest that crossmodal associations can be modulated by lexical labels.

  7. Children with hearing impairment: conversational temporal skills and rhythmic training

    Céline Hidalgo1,2, Simone Falk3, Noël Nguyen1, and Daniele Schön2

    1. Laboratoire Parole et Langage, Aix-Marseille University, France

    2. Institut de Neurosciences des Systèmes, Aix-Marseille University, France

    3. Institut de Philologie allemande, Ludwig-Maximilians-Universitaet, Munich, Germany

    Children with hearing impairment (HI) educated in an oral environment display conversational difficulties in spite of good results on standard language assessments. In two studies, we test the hypothesis that these difficulties could be due to an alteration of temporal skills and predictive coding. More precisely, we hypothesize that 30 minutes of active musical rhythmic training will improve the accuracy of conversational turns. To this end, we designed a task wherein the child has to name pictures in alternation with a virtual partner. In a first study, we manipulated the speed and regularity of the turns and measured the effect of rhythmic training on the accuracy and regularity of children's responses. Results show that the rhythmic training improves the sensitivity of children with HI to the temporal variations of the alternation. In a second study, we manipulated the speech rate of the virtual partner and also recorded EEG. We will present analyses bridging neural sensitivity to perceptual deviance (here, an MMN to temporally deviant trials), the ability to converge to different speech rates, and accuracy in turn taking. Finally, we will show to what extent a short rhythmic training can influence these different skills.

  8. The early origins of the consonant bias in word recognition: Spanish monolingual and Spanish-Catalan bilingual infants

    Camillia Bouchon1, Camille Frey1, Nuria Sebastián-Gallés1,2, and Juan M. Toro1,2

    1. Universitat Pompeu Fabra, Center for Brain and Cognition, Spain

    2. ICREA

    Consonants carry more lexical information than vowels, and in many languages adults rely more on consonants than on vowels in lexical tasks. Infants exhibit this consonant bias more or less early in lexical development depending on their native input (French and Italian: 8-12 months; English: 30 months). These crosslinguistic variations remain unexplained. The impact of consonant vs. vowel mispronunciations on word recognition in Spanish during the first year will be compared in Spanish monolinguals and Spanish-Catalan bilinguals, and the influence of two differing characteristics of their input will be explored. If the C/V ratio in the phonetic system contributes more to the emergence of the consonant bias, it should occur earlier in monolinguals (exposed to a very simple vowel system in their input). If the relative C/V weight for lexical identification contributes more, it should occur earlier in bilinguals (2/3 of Spanish-Catalan cognates differ principally in vowels in their input).

    Preliminary results show a vowel bias in both groups at 4 months, as in French infants, and a consonant bias only in bilinguals at 8 months, suggesting that C/V importance at the lexical level has more influence than the phonetic system on the emergence of the consonant bias.

  9. The effect of eye contact on the retention of information

    Cristina Galusca1, Alveno Vitale1, and Luca L. Bonatti1,2

    1. Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain

    2. ICREA

    Learning information generalizable to kinds relies heavily on the presence of ostensive-referential cues used by teachers to direct novices' attention to the relevant aspects of their message. Likewise, the type of information infants attend to depends on the presence of ostensive-communicative signals.

    Here, we present a series of six experiments aimed at identifying the kinds of information for which ostensive signals are particularly relevant in adult participants (N = 188, aged 18-35 years). We isolated a simple ostensive cue, eye contact, and evaluated how adults are influenced by its presence when they are only briefly exposed to information of different kinds, ranging from digit, word and nonword span to complex knowledge such as names or generic/specific facts about novel objects.

    We found no effect of eye contact on the low-level tasks (digit, word and nonword span). By contrast, eye contact had an impact on the retention of facts. One week after a single exposure to a movie in which the actress did or did not make eye contact with the participants, specific facts were better remembered when presented ostensively. We suggest that in adults, ostensive cues may consolidate the memory traces of episodic facts even after a brief encounter with a novel fact. Because of its selectivity to particular kinds of information, this effect cannot be explained by a simple increase in attention. Instead, it appears that ostensive cues modify the relevance of otherwise meaningless episodic information.

  10. How social-reward hormones modulate language learning

    Constantina Theofanopoulou

    Universitat de Barcelona, Spain

    A growing body of evidence supports the decisive role of several hormones in the motivational circuits that underlie language learning. Independent studies have highlighted the importance of three hormones in our social reward system: oxytocin, dopamine and serotonin. The aim of this study is, first, to construct a synthetic framework of the brain circuits where oxytocin, dopamine and serotonin interactions mediate motivation (at the level of connectivity and brain rhythms), based mostly on animal studies (mice, prairie voles, songbirds), and second, to show that this framework may also account for the human reward system subserving language learning. For this second goal, we relied on a PubMed search of studies pertinent to the localization of these hormones in the human brain and the effects they exert at a behavioral level, and supplemented this information with searches of the Allen Brain Atlas concerning the brain areas in which the genes for these hormones and their receptors and transporters are expressed. Regarding brain rhythms, we focused on experimental results pointing towards a modulatory role of these hormones upon slow waves, which are thought to be critical for memory consolidation.

  11. Communication profile in persons with Angelman Syndrome

    Karla Guerrero Leiva and Carme Brun i Gasca

    Universitat Autònoma de Barcelona, Spain

    Angelman syndrome (AS) is a severe neurodevelopmental disorder with an estimated prevalence of 1 in 20,000-30,000 newborns. It is caused by the lack of expression of maternally inherited imprinted genes on chromosome 15q11-q13. The syndrome has a characteristic phenotype including severe intellectual disability, severe speech impairment, epilepsy, a happy appearance, excessive laughter, an easily excitable personality, hyperactivity and fascination with water.

    The aim of this study is to explore the levels of language and communication in 60 individuals with AS, aged 3-56 years and from different countries, using the MacArthur Communicative Development Inventory, in collaboration with the associations of persons with Angelman syndrome from Spain, Argentina and Portugal.

    The results show specific communication and language characteristics in persons with AS across different areas. Regarding genetic cause, persons with AS due to a deletion perform worse than the other groups. Significant differences are found between countries, but not by the education level of the caregiver. This study provides more knowledge about communication in persons with AS, which could lead to improvements in speech therapy intervention.

  12. When Alice in Wonderland has an accent: The effects of accented speech on attentional networks

    Mireia Hernández1, Noelia Ventura-Campos2, Albert Costa1,3, Anna Miró-Padilla4, and César Ávila4

    1. Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain

    2. Department of Mathematics Teaching, Faculty of Teacher Training, Universitat de València, València, Spain

    3. ICREA

    4. Neuropsychology and Functional Imaging Group, Universitat Jaume I, Castellón, Spain

    Prior fMRI studies have shown that neural activity in regions that process the acoustic-phonetic signal is affected by accented speech (e.g., Bestelmeyer et al., 2015). However, the effects of accented-speech processing beyond the perceptual level remain unclear. In the present fMRI study, we used Independent Component Analysis to investigate how our attentional system deals with dialect-accented messages. To this aim, we used stimuli close to everyday scenarios of speech perception: watching movies. In the scanner, 30 native speakers of Standard Spanish (that of Madrid) watched scenes from Alice in Wonderland (Burton, 2010). Scenes were presented in three different dubbing conditions: (a) UNACCENTED: the participants' native Spanish dialect (Standard Spanish); (b) ACCENTED: a different Spanish dialect (Mexican Spanish); and (c) UNKNOWN LANGUAGE (Dutch), as a baseline. Relative to unaccented speech, accented-speech perception required greater neural resources to evaluate whether the acoustic-phonetic stimuli matched the native templates (based on the Cerebellum-Putamen Network). This drove a preference of the Caudate-Thalamus Network for native-like articulatory processing. More importantly, processing accented dialogs was attentionally more demanding: it recruited attentional networks (the Dorsal Attentional Network and the Salience Network) more strongly and allowed less mind-wandering (based on the Precuneus Network).

  13. Linguistic background affects bilingual children’s attention to the mouth of a talking person

    Joan Birulés, Laura Bosch, and Ferran Pons

    Universitat de Barcelona

    A recent study indicates that Spanish-Catalan bilingual infants shift their attention to the mouth of a talking face both earlier (4 months) and for a longer period of their development (12 months) than their monolingual counterparts (Pons et al., 2015). The current study explored whether this preference for the mouth in bilingual infants extends to older ages (4-5 years), and whether the attention pattern is associated with bilingualism per se or only with bilingualism in closely related languages. For this purpose, we tested twenty 4- to 5-year-old Spanish-Catalan and Spanish-Russian bilingual children. They watched a female speaking Spanish (native) and English (L3) while we recorded eye gaze with an eye tracker. Results revealed that the groups differed in their eyes-mouth attention pattern, with the former group (Spanish-Catalan) showing a stronger preference for the mouth in both language conditions, a preference absent in the latter group (Spanish-Russian). Hence, we show that the mouth's redundant speech cues continue to capture bilingual children's attention, but only when they have been exposed to, and have simultaneously acquired, a pair of closely related languages, which more constantly need to be disambiguated. These results might have further implications for the way bilinguals are categorized and for the effects that language proximity can have on their processing mechanisms.

  14. Top-down effects of meter induction on audition and vision

    Alexandre Celma Miralles1, Robert Frank de Menezes2, and Juan M. Toro1,3

    1. Center for Brain and Cognition, Universitat Pompeu Fabra, Spain

    2. Universitat de Barcelona, Spain

    3. ICREA

    This study focuses on meter induction, the ability to organize the isochronous beats perceived in music into hierarchical structures. Since top-down effects of meter induction have recently been demonstrated in the auditory domain, we aimed to assess their presence in the visual modality. Sixteen musicians were asked to mentally project binary (i.e., a strong-weak pattern) and ternary (i.e., a strong-weak-weak pattern) meter onto analogous visual and auditory stimuli presented separately. Participants' electrophysiological responses were recorded during the presentation of sequences of tones and blinking circular shapes at 2.4 Hz. The elicited steady-state evoked potentials were analyzed in the frequency domain, which allowed us to compare the frequencies of the beat (2.4 Hz), its first harmonic (4.8 Hz), the binary subharmonic (1.2 Hz), and the ternary subharmonic (0.8 Hz) within and across modalities. We first checked the magnitude spectra and found a significant effect at 0.8 Hz in the ternary condition for both modalities, implying cross-modal meter induction. An interaction between magnitude and modality was also attested at 2.4 and 4.8 Hz. After using the control condition as a baseline, the power spectra revealed significant differences from zero for both modalities in the ternary condition at 0.8 Hz, as well as for the auditory binary condition at 1.2 Hz. These findings support the idea that the processing of meter can be modulated by top-down mechanisms that interact with our perception of rhythmic events. They also suggest that such modulation is not domain-specific, but can also apply to the visual domain.
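
    At its core, the frequency-tagging analysis described above reduces to reading the magnitude spectrum of the averaged signal at the beat and subharmonic frequencies. A minimal Python sketch of that step (illustrative only, not the authors' pipeline; the sampling rate and placeholder data are assumptions):

        import numpy as np

        def magnitudes_at(signal, fs, freqs):
            # Magnitude spectrum of a trial-averaged channel, sampled at the
            # requested frequencies of interest (Hz).
            spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
            resolution = fs / len(signal)  # Hz per FFT bin
            return {f: spectrum[int(round(f / resolution))] for f in freqs}

        # Hypothetical usage: 60 s of data sampled at 500 Hz.
        fs = 500
        erp = np.random.randn(fs * 60)  # placeholder for real EEG data
        print(magnitudes_at(erp, fs, [0.8, 1.2, 2.4, 4.8]))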

  15. The role of memory consolidation in learning and generalising inflectional morphology: behavioural and fMRI findings

    Lydia Vinals1,2, Jelena Mirković3,4, Gareth Gaskell3, and Matt Davis1

    1. Cognition and Brain Sciences Unit, Cambridge, UK

    2. Department of Theoretical and Applied Linguistics, University of Cambridge

    3. University of York, York, UK

    4. York St John University, York, UK

    Language learning and generalisation are tuned to input statistics. In two experiments, we explored the role of overnight memory consolidation in learning and generalising novel inflectional affixes trained with different type and token frequencies. We used an artificial language to train participants on two sets of plural affixes, distinguished by grammatical gender, on two successive days. Within each set, a subset of words contained an ambiguous phonological cue (e.g. arb) which was associated both with a high type frequency regular affix (e.g. farbaff[fem,plur], tarbopp[masc,plur] but also gleetaff[fem,plur], shilnopp[masc,plur], etc.) and a high token frequency irregular affix (e.g. varbesh[fem,plur], yarbull[masc,plur]). In Experiment 1, productive generalisations to untrained phonologically ambiguous singulars (e.g. zarbi[fem,sing], zarbu[masc,sing]) showed greater influence of token frequency for affixes trained on the previous day than for affixes trained on the same day. In Experiment 2, we observed overnight changes in hippocampal and neocortical responses to high type and high token frequency affixes trained in the context of an ambiguous phonological cue. These results suggest a role for overnight memory consolidation in the extraction of frequency statistics underlying inflectional morphology. We discuss these findings with reference to a Complementary Learning Systems account of learning and memory.

  16. Perception of acoustic stress patterns across species: humans, budgerigars, and rats

    Marisa Hoeschele1 and Juan M. Toro2,3

    1. University of Vienna, Austria

    2. Center for Brain and Cognition, Universitat Pompeu Fabra, Spain

    3. ICREA

    The ability to perceive lexical stress, the apparent "strength" of some syllables relative to others, is important because it can help a listener segment speech and distinguish the meaning of words and sentences. Very little is known, however, about whether these abilities are human-specific, or whether we can find them in other species. We used a go/no-go operant paradigm to compare humans to budgerigars (Melopsittacus undulatus) and rats (Rattus norvegicus) in their ability to distinguish trochaic (stress-initial) from iambic (stress-final) nonsense words. We chose budgerigars as a comparison because they are vocal learners, like humans, and we chose rats because they are more closely related to humans but are not vocal learners. Once the three species had learned the task, we presented novel words and also words that had certain cues removed (e.g., pitch) to determine which cues were most important in stress perception. All three species learned the task and generalized the discrimination to nonsense words they had never heard before. However, when some cues to lexical stress were removed, humans were the least impaired, followed by budgerigars, while rats were no longer able to solve the task. This suggests that vocal learning may be relevant for processing prosodic information.

  17. Temporal predictability in speech: Comparing statistical approaches on 18 world languages

    Yannick Jadoul, Andrea Ravignani, Bill Thompson, Piera Filippi, and Bart de Boer

    Artificial Intelligence Lab, Vrije Universiteit Brussel

    Temporal regularities in speech, such as interdependencies in the timing of speech events, are often thought to scaffold early acquisition of the building blocks of speech: by providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Here, we test whether syllable occurrence is predictable over time. Existing measures of speech timing (i) tend to focus on first-order regularities among adjacent units, and (ii) are overly sensitive to idiosyncrasies in the data they describe. Instead, we pursue a two-pronged strategy to quantify predictability in a sample of 18 languages, integrating several statistical methods. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a maximally simple distributional measure that nevertheless correlates with the common, more complex measure nPVI. Second, unlike previous approaches, we model higher-order temporal structure – regularities that arise in an ordered series of syllable timings – testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively perceived temporal regularities and the absence of universally accepted lower-order objective measures. Together, our analyses provide weak evidence for predictability at different time scales, though it is difficult to reliably infer predictability at higher orders. We conclude that any temporal predictability in speech may arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint with confidence at any particular locus.
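
    For reference, the nPVI mentioned above is the normalized Pairwise Variability Index (Grabe & Low, 2002): the mean of the absolute durational differences between successive intervals, each normalized by the pair's mean duration and scaled by 100. A minimal Python sketch (illustrative only, not the authors' analysis code):

        def npvi(durations):
            # nPVI = 100/(m-1) * sum over successive pairs of
            # |d_k - d_(k+1)| / ((d_k + d_(k+1)) / 2)
            pairs = zip(durations[:-1], durations[1:])
            return 100 * sum(abs(a - b) / ((a + b) / 2) for a, b in pairs) / (len(durations) - 1)

        print(npvi([0.2, 0.2, 0.2, 0.2]))  # perfectly regular timing -> 0.0
        print(npvi([0.1, 0.3, 0.1, 0.3]))  # alternating long-short -> 100.0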

  18. There is more to fast-mapping than meets the eye

    Cristina Galusca1, Martín Guida Fórneas1, and Luca L. Bonatti1,2

    1. Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain

    2. ICREA

    The current studies investigate the role of one ostensive cue, eye contact, in the long-term acquisition of novel names and two types of facts (specific and generic) induced by fast-mapping in 5-year-old children. During an object-matching game, participants were incidentally presented with novel names and facts associated with some of the novel objects. We evaluated the effect of eye contact on the retention of information right after the presentation and at a one-week interval. The results revealed better performance when information was presented with eye contact, an effect maintained even a week later after only a brief exposure. This suggests that eye contact modifies the relevance of the information presented and thus improves long-term retention of names and facts. Overall, facts were retained significantly better than names, and we found no difference in performance between specific and generic facts. Interestingly, in the second session, performance for names was above chance only when the presentation was made with eye contact, which is in agreement with the "natural pedagogy" theory, which highlights the importance of ostensive cues in encoding object identity.

  19. Temporal flexibility to orient attention modulates rule learning in childhood

    Anna Martinez-Alvarez1,2, Pablo Ripolles1,2, Monica Sanz-Torrent1, Ferran Pons1, and Ruth de Diego-Balaguer1,2,3

    1. Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Spain

    2. Cognition and Brain Plasticity Unit, IDIBELL (Institut d'Investigació Biomèdica de Bellvitge), Spain

    3. ICREA (Catalan Institution for Research and Advanced Studies), Barcelona, Spain

    While listening to speech, elements in a syntactic dependency do not always occur at the same temporal distance, since in a non-adjacent dependency (is V-ing) the intermediate element may vary in length (is doing, is remembering, is learning). Hence, in order to learn, we have to be flexible about when to expect the second element of the dependency to appear. We tested the hypothesis that the development of the ability to flexibly orient attention in time may modulate non-adjacent rule learning. To this end, we designed two tasks: a temporal orienting task and a rule-learning task. We tested 92 typically developing children (ranging from 4 to 9 years) and 26 adults. Our results reveal that, irrespective of age, individual differences in temporal attention flexibility appear to be an important factor modulating language performance in childhood. In a second study, children with Specific Language Impairment (SLI) were tested. We found that, in the rule-learning task, SLI children without attention deficits performed significantly better than SLI children with attention deficits. These results suggest that attention deficits in SLI (but not language impairment per se) may have an impact on rule learning. Taken together, our findings suggest that children recruit attentional mechanisms in order to correctly orient attention in time to extract non-adjacent dependencies in language.


Friday, 30th of September

  1. Characterizing the species-specific developmental trajectory underlying our enhanced learning capacity

    Cedric Boeckx1,2, Constantina Theofanopoulou2, and Saleh Alamri2

    1. ICREA

    2. Universitat de Barcelona, Spain

    Phylogenetic studies like Hublin et al. (2015) strongly support the idea that H. sapiens follows a species-specific brain growth trajectory that departs from its closest extant and extinct relatives during the first year of life, at a time critical for language acquisition (Friedmann and Rusou 2015). This differential developmental pattern results in a selective expansion and complexification of several areas including the frontal pole, parietal lobe, and cerebellum. Here we offer evidence from genetics and early developmental studies suggesting that this difference stems from early postnatal changes in neurogenesis in the subventricular zone. These changes produce immature neurons displaying specific features of synaptic plasticity that enhance learning capacities, and they target specific brain areas, some of which lie outside the classical 'language centers' but are nevertheless important for language (e.g., the ventromedial prefrontal cortex). When these specific changes in neurogenesis are affected, cognitive disorders like autism can result. We argue that this enhancement is what lies behind the neural basis of our specific language learning instinct, which allows us to move up the vocal learning complexity scale (Petkov and Jarvis 2012), but also gives rise to higher cognitive flexibility more generally (Burghardt et al. 2012).

  2. Interactions between vocabulary skills and recognition of basic emotion labels in two-year-olds

    Oytun Aygun, Louise Goyet, and Pia Rämä

    LPP, Université Paris Descartes, France

    When children start to categorize emotions, they initially form broad categories that are fine-tuned with experience during the preschool years (Russell and Widen, 2010). It has been suggested that language aids the decoding of facial expressions through categorisation and mental representations, and it is likely that developing language skills contribute to category fine-tuning. We hypothesized that 24-month-old children with higher expressive vocabulary skills are better at recognizing facial expressions than those with a lower level of language skills. We tested thirty French-learning children in a looking-while-listening task to assess recognition of four basic emotion labels (happy, sad, fearful, angry). Children were presented with images of two faces expressing emotions, and after a short preview, one of them was labelled. Looking times to target emotions during pre- and post-naming phases were recorded. Vocabulary skills were measured using a CDI questionnaire. The results showed that children with higher vocabulary skills looked longer at the happy and fearful target faces after naming. Children with lower expressive vocabulary skills did not show a naming effect; that is, they looked equally at the target and the distracter image after naming. Our results suggest that developing vocabulary skills contribute to the recognition of emotion labels in young children.

  3. Distinct ERP profiles for learning rules over vowels and consonants

    Júlia Monte-Ordoño1 and Juan M. Toro1,2

    1. Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain

    2. ICREA

    The Consonant-Vowel hypothesis suggests that consonants and vowels carry different information during language learning: consonants provide more information for lexical access, while vowels carry prosodic information. In this study we explored whether these functional differences trigger different neural responses in an abstract rule-learning task. We recorded event-related potentials (ERPs) while nonsense words were presented in an oddball paradigm. Standard stimuli followed an ABB rule, Phoneme Deviants followed the same structure as standards, and Rule Deviants followed an ABA rule. In the Vowel condition, the rules were implemented over the vowels (ABB rule: fufefe; ABA rule: fufefu). In the Consonant condition, the rules were implemented over the consonants (ABB rule: lomomo; ABA rule: lomolo). The results showed a different ERP distribution for the Consonant and Vowel conditions. When the rules were implemented over the vowels, a frontal negative component was triggered around 400 ms after the Rule Deviant stimuli. In contrast, in the Consonant condition, we observed a posterior N400 component after the presentation of the Phoneme Deviant stimuli. The results suggest that consonants and vowels play dissociable roles during language processing and add further evidence for the division of labor proposed by Nespor et al. (2005).

  4. Music recursion: Preliminary experiments on human sensitivity to rhythmic structure in a grammar with recursive self-similarity

    Andreea Geambașu1, Andrea Ravignani2, and Clara C. Levelt1

    1. Leiden University Centre for Linguistics - Leiden Institute for Brain and Cognition, Leiden University

    2. Department of Cognitive Biology, University of Vienna - AI Lab, Vrije Universiteit Brussel

    Processing of hierarchical structures has been proposed as a uniquely human ability, a hallmark of the linguistic system that distinguishes human language from animal communication systems. Recursion is often considered the pinnacle of human-specific hierarchical structures. In Artificial Grammar Learning experiments, human participants can learn the context-free grammar AnBn; yet whether acquisition of this grammar can be taken as evidence for processing recursive information at all is debated. Here we take an alternative approach, testing recursion in the musical, rhythmic domain. We present the first rhythm-detection experiment using a Lindenmayer grammar, a self-similar recursive grammar previously shown to be learnable using speech stimuli. Participants' sensitivity to recursive rhythmic structure was tested against different types of foils when given implicit vs. explicit instructions. Preliminary results suggest that (i) at the group level, participants were unable to correctly accept or reject grammatical and ungrammatical strings, although (ii) five (of 40) participants were able to do so when given specific instructions. We contrast our findings with results on human sensitivity to recursion in other domains and modalities, proposing additional experiments to test whether humans are particularly adept at processing recursive structures and, if so, whether this is a domain-general ability.
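
    A Lindenmayer (L-system) grammar generates self-similar strings by rewriting every symbol in parallel at each iteration, which is what makes the resulting structures recursive. A minimal Python sketch of the generation procedure (the example rules are the classic Fibonacci L-system, given purely for illustration, not necessarily the grammar used in the experiment):

        def lindenmayer(axiom, rules, iterations):
            # Rewrite every symbol in parallel on each pass; symbols
            # without a production rule are copied unchanged.
            s = axiom
            for _ in range(iterations):
                s = "".join(rules.get(ch, ch) for ch in s)
            return s

        print(lindenmayer("A", {"A": "AB", "B": "A"}, 5))  # -> ABAABABAABAAB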

  5. Looking at early word segmentation and mapping through pupillometry

    Maria Teixidó and Laura Bosch

    Universitat de Barcelona

    Previous research has shown that 6-month-olds can use prosody to simultaneously extract a word from an artificial language and map it onto a referent (Shukla, White and Aslin, 2011). To further explore this ability using natural speech, 6- and 9-month-old infants were tested with an audiovisual segmentation and mapping task in which objects moved in alignment with prosodically marked words. Visual fixation patterns and pupil dilation measures were recorded. Visual fixation measures yielded significant between-group differences (p = 0.02), with only 9-month-olds succeeding at this dual task. An ANOVA using mean pupil size in response to words (baseline and learning phase) as a within-group factor and age (6 and 9 months) as a between-group factor showed a significant interaction (p = 0.01), with only 9-month-olds increasing pupil size during learning. Two additional experiments with similar material, testing segmentation and mapping separately, confirmed that these abilities are present by 6 months of age. Increases in pupil size were found only in the mapping task, suggesting that pupil dilation might reflect object-label association processing rather than segmentation. Taken together, the results indicate that simultaneously segmenting and mapping two words extracted from natural language is still too challenging cognitively at 6 months of age.

  6. Development of language processing abilities in children with Specific Language Impairment

    Lucia Buil-Legaz, Daniel Adrover-Roig, Raúl López Penadés, Víctor Alejandro Sánchez Azanza, and Eva Aguilar-Mediavilla

    Universitat de les Illes Balears

    Language development in children with Specific Language Impairment (SLI) is still poorly understood. This study describes the longitudinal trajectory of several measures of language processing abilities in children with SLI relative to age-matched control children. A set of measures of language processing abilities (non-word repetition, sentence repetition, phonological awareness, rapid automatic naming, and verbal fluency) was collected at three time points, from 6 to 12 years of age, using a prospective longitudinal design. Results revealed that, at all ages, children with SLI obtained lower scores on measures involving a high load on phonological working memory (non-word repetition, sentence repetition, and phonological awareness without visual cues) when compared to typically developing children. Measures with a low load on phonological working memory (rapid automatic naming, phonological awareness with visual cues, and semantic verbal fluency) improved over time: differences present at 6 years of age did not persist at later testing points. Results therefore show that children with SLI manifest persistent difficulties in tasks that involve manipulating segments of words and maintaining verbal units active in phonological working memory, while other abilities, such as access to underlying phonological representations, are less affected.

  7. How does language input shape word learning? A longitudinal study in young children

    Cindy Bellanger1, Jean-Pierre Chevrot2, and Elsa Spinelli1

    1. Université Grenoble-Alpes, LPNC, Grenoble, France

    2. Laboratoire Lidilem, Université Stendhal, Grenoble, France

    We carried out a 3-month longitudinal study of 27 two-year-old French children to assess the influence of language input on word learning and word segmentation abilities. We manipulated the linguistic input by creating DVD stories including pseudo-nouns designating fictional animals. Half of the pseudo-nouns were presented with four different determiners (variability condition) and the other half were always presented with the same determiner (non-variability condition). After each month of daily viewing of the stories, children were tested with two perception tasks testing pseudo-noun recognition and two production tasks testing pseudo-noun segmentation and noun-phrase fluency. According to the principles of Universal Grammar (Chomsky, 1953), certain abstract grammatical categories are available early: noun-phrase utterances are segmented, and determiners and nouns are directly assigned to the corresponding categories and can be reused (Valian, 2014). In contrast, Usage-Based accounts (Tomasello, 2003) expect noun-phrase utterances to be stored as wholes; determiner-noun segmentation happens later, through hearing nouns in various contexts (Pine et al., 2013). Usage-Based accounts thus predict differences in pseudo-noun segmentation and fluency between the variability and non-variability conditions, differences that are not expected under Universal Grammar. The data are currently being analyzed and results will be available soon.

  8. The impact of beat gestures on L2 acquisition

    Olga Kushch1, Daria Gluhareva1, Alfonso Igualada1, and Pilar Prieto1,2

    1. Universitat Pompeu Fabra

    2. ICREA

    Beat gestures are rhythmic hand and arm movements that are typically associated with prominent prosodic positions in speech. Little is known about their potential beneficial effects (in addition to the effects of prosodic prominence) on L2 learning.
    The present study consists of three experiments. Experiment 1 investigates the effects of prosodic prominence (L+H* pitch accent) and visual prominence (beat gestures) on L2 novel vocabulary acquisition. Foreign words were presented under four experimental conditions, with prominence in (1) neither speech nor gesture, (2) both speech and gesture, (3) speech but not gesture, and (4) gesture but not speech. The results showed a positive effect of prosodic and gestural prominence working together on L2 word memorization.
    Experiments 2 and 3 investigate the effect of beat gesture observation and production on pronunciation improvement in a language with different rhythmic properties from one's own. The results show that beat gesture observation and production improve learners' accentedness ratings.
    The results of these three experiments demonstrate that beat gestures act as highlighters of prosodic information and represent a useful supportive strategy for foreign language acquisition. Our results are in line with embodied cognition perspectives (e.g. Hu et al., 2015).

  9. Traces of Statistical Learning in Functional Connectivity after Artificial Language Exposure

    Pallabi Sengupta1, Gorka Zamora-López1, Miguel Burgaleta1, Gustavo Deco1,2, and Núria Sebastián-Gallés1,2

    1. Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain

    2. ICREA

    One fundamental step when learning a new language is to segment words from the speech signal. To achieve this, humans rely on Statistical Learning (SL), a domain-general ability that enables the implicit detection of probabilistic regularities in the surrounding environment. The role of brain connectivity in SL has been explored previously, highlighting the relevance of structural and functional connections between frontal, parietal, and temporal cortices. However, whether SL can induce changes in the functional connections of the resting-state brain has yet to be investigated. To address this question, we applied a pre-post design in which participants (n = 38) underwent resting-state fMRI acquisition before and after in-scanner exposure to either an artificial language stream (formed by 4 concatenated words) or a random audio stream. We then adapted, for the first time, a technique widely used in genetic studies to compare connectivity changes in the active links between the two conditions. Our results showed that exposure to an artificial language stream significantly changed (corrected p < .05) the functional connectivity between the Right Superior Parietal Gyrus and the Left Inferior Parietal Lobule, as well as between the Left Middle Frontal Gyrus and the Left Inferior Frontal Gyrus, Orbital Part.
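
    The pre-post comparison described above rests on estimating a functional-connectivity matrix from each resting-state scan. A minimal Python sketch of that step (illustrative only; the ROI count and placeholder data are assumptions, and the link-selection technique adapted from genetic studies is not reproduced here):

        import numpy as np

        def connectivity(ts):
            # Functional connectivity as pairwise Pearson correlations
            # between ROI time series; ts has shape (n_rois, n_timepoints).
            return np.corrcoef(ts)

        # Hypothetical usage: one participant's pre- and post-exposure data.
        n_rois, n_timepoints = 90, 200
        pre = np.random.randn(n_rois, n_timepoints)   # placeholder for real data
        post = np.random.randn(n_rois, n_timepoints)
        delta = connectivity(post) - connectivity(pre)  # change in each link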

  10. Mutual influences between epistemic intonation and co-speech gesture in online language comprehension

    Evangelia Kiagia1, Joan Borrás-Comes1,2, and Pilar Prieto1,3

    1. Universitat Pompeu Fabra

    2. Universitat Autònoma de Barcelona

    3. ICREA

    While a number of previous studies have proposed that (iconic) gestures and speech interact mutually and obligatorily during online processing (e.g., Kelly et al. 2003; 2010), little is known about the mutual and bidirectional influences between pragmatic prosody and gestures. Previous studies have shown how certain intonation patterns and gestures encode the speaker's commitment to the proposition and the speaker's agreement with their interlocutor (Borrás-Comes & Prieto 2011).

    Experiment 1 presented participants with a set of gesture primes expressing levels of speaker agreement and commitment and asked them to produce a target phrase. Pilot results with 10 participants show that the gesture primes elicited intonation patterns expressing different levels of speaker agreement and commitment, confirming a direct influence of gestures on the corresponding intonation patterns. Experiment 2 consisted of an eye-tracking visual search experiment in which participants saw images with epistemic gestures while listening to neutral sentences produced with a set of intonation patterns carrying different levels of speaker agreement and commitment. Pilot results from the two experiments demonstrate the bidirectional and obligatory influences between intonation and gesture, and more specifically, that epistemic intonation and gestures form a semantically integrated system in online language comprehension.

  11. Predicting syllables and silences: an ERP study

    Vittoria Spinosa and Iria SanMiguel

    Institute of Neurosciences and Department of Clinical Psychology and Psychobiology, University of Barcelona

    The human brain processes self-generated stimuli differently from stimuli generated by external sources. In particular, neural responses to self-generated sounds are attenuated. In humans, the self-generated sound par excellence is language. Here, we investigate the neural mechanisms underlying the differential processing of self-generated sounds, which probably contribute to the self-monitoring of speech production. Current theories propose that the brain constructs an internal representation of the external world in order to guide our actions. Using this representation, we generate predictions regarding the sensory consequences of our motor acts. Neural responses to stimuli that match the predictions (e.g., predictable self-generated speech sounds) are attenuated, while error-related responses are elicited when our motor acts have unexpected sensory consequences. In the present study we measured event-related potentials elicited by the auditory presentation of a syllable, self-triggered by the subject pressing one of two buttons. We manipulated the press-effect contingencies such that one button predicted the presentation, and the other the absence, of the sound. We investigated the differences between predicting the presence vs. the absence of a verbal stimulus after a motor act, and the violation of each of these predictions. The results corroborate the attenuation of neural responses to predicted self-generated sounds, and show differences in the error signals elicited by the unexpected presentation and the unexpected omission of the self-generated sound.

  12. The valence-space metaphor is grounded in embodied experience

    Emilia Castaño1, Elizabeth Gilboy2, Sara Feijóo1, Elisabet Serrat3, Carles Rostan3, Joseph Hilferty1, and Toni Cunillera2

    1. English Department, Faculty of Philology, University of Barcelona, Spain

    2. Department of Cognition, Development and Educational Psychology, University of Barcelona, Spain

    3. Department of Psychology, Faculty of Education and Psychology, University of Girona, Spain

    Conceptual metaphor is ubiquitous in language and thought, as we usually reason and talk about abstract concepts in terms of more concrete ones via metaphorical mappings that are hypothesized to arise from our embodied experience. One pervasive example is the VALENCE IS VERTICALITY metaphor, which maps affective valence onto the vertical axis of space (e.g., GOOD IS UP and BAD IS DOWN). In the current study, we used a conceptual-coherence task to explore whether the semantic processing of valence automatically recruits spatial cognition. We also examined whether the speed and accuracy of valence evaluation vary as a function of word class (nouns vs. adjectives) and body posture (namely, hand position). Experiment 1 showed that adjectives, but not nouns, elicited spatial-congruency effects, indicating that grammatical category is a crucial factor in space-valence associations. Experiments 2 and 3 showed that the alignment of participants' body posture with that of the stimuli facilitated the judgment of positive- and negative-valence words, but only when response allocation was congruent with the GOOD IS UP metaphor. Overall, these results are in line with the embodiment thesis, which claims that the understanding of many abstract concepts is grounded in bodily experience.

    Emilia Castaño, Elizabeth Gilboy, Sara Feijóo, Elisabet Serrat, Carles Rostan, Joseph Hilferty, & Toni Cunillera
  13. Beat gestures help preschool children to improve recall and language abilities

    Friday, 30th of September

    Beat gestures help preschool children to improve recall and language abilities

     

    Alfonso Igualada2,3, Núria Esteve-Gibert2,4, Judith Llanes2, Olga Kushch2, Ingrid Vilà2, and Pilar Prieto1,2

     

    1. ICREA

    2. Universitat Pompeu Fabra

    3. Universitat Oberta de Catalunya

    4. Aix Marseille Université, CNRS

     

    Gesture and prosody are important precursors of children's early language development. However, it is unclear whether gestural and prosodic integration abilities can boost preschoolers' memory and linguistic abilities. While researchers have shown that adults can benefit from the presence of beat gestures in word-recall tasks, studies have failed to conclusively replicate these findings with preschool children. This work investigates whether accompanying words with beat gestures and prosodic prominence can help preschoolers improve word recall in lists of words (Experiment 1), whether it can improve memorization and discourse comprehension of contrastively focused words (Experiment 2), and whether training based on observing narratives produced with beat gestures can boost children's narrative skills (Experiment 3).

    Results from Experiment 1 with one hundred 3-to-5-year-old children showed that children recalled the target word significantly better when it was accompanied by a beat gesture than when it was not, indicating a local recall effect. Results from Experiment 2 with fifty-one 4-year-old children also indicate clear effects of observing beat gestures and prosodic prominence on the recall of the target focused items and on discourse comprehension abilities. Finally, results from Experiment 3 with forty-four 5-to-6-year-old children showed a positive effect of the training on preschoolers' narrative discourse abilities.

    Alfonso Igualada, Núria Esteve-Gibert, Judith Llanes, Olga Kushch, Ingrid Vilà, & Pilar Prieto
  14. Neural correlates of benefits for sensory consonance

    Friday, 30th of September

    Neural correlates of benefits for sensory consonance

     

    Paola Crespo-Bojorque1, Júlia Monte-Ordoño1, and Juan M. Toro1,2

     

    1. Center for Brain and Cognition, Universitat Pompeu Fabra

    2. ICREA

     

    Consonant and dissonant musical intervals differ in how pleasant they are perceived to be and in how easily they are processed. Consonant intervals tend to be rated as more pleasant and are more readily processed than dissonant intervals. In the present study, we explored how the brain responds to changes in consonance and dissonance, and how experience modulates these responses. We registered event-related brain potentials (ERPs) while participants were presented with sequences of consonant intervals interrupted by a dissonant interval, or sequences of dissonant intervals interrupted by a consonant interval. Participants were musicians or musically naive volunteers. Results showed that changes in a sequence of consonant intervals are easily detected independently of musical expertise, as revealed by an MMN component elicited in both musicians and non-musicians. Changes in a sequence of dissonant intervals elicited a late MMN only in participants with extensive musical training. Moreover, a P100 (an ERP component related to unpleasant stimuli) was elicited only in non-musicians when a dissonant sound appeared in a consonant sequence. Our results demonstrate a processing advantage for consonance at the neural level. They also provide support for the idea that experience improves the processing of musical intervals and influences the aesthetic perception of sounds.

    Paola Crespo-Bojorque, Júlia Monte-Ordoño, & Juan M. Toro
  15. Social status and learning: how infants trust high-rank agents more

    Friday, 30th of September

    Social status and learning: how infants trust high-rank agents more

     

    Jesús Bas1, Alba Ayneto1, and Núria Sebastián-Gallés1,2

     

    1. Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain

    2. ICREA

     

    When infants receive conflicting information from different adults, they use several cues to determine which adult is the most reliable. Here we study how social status acts as a cue that helps infants choose relevant informants.

    The study had three parts. First, infants were presented with a video of two female agents competing for the same goal, in which one of them (the high-rank agent) always prevailed. In the second part, the face of one of the agents appeared in the centre of the screen, followed by the sound of an animal (sheep/cat for one agent and cow/cat for the other agent). The agent then looked at one of the corners of the screen and the corresponding animal appeared (similar to Tummeltshammer et al., 2014). Critically, one animal (the cat) appeared in different locations depending on the agent. In the third part, only the sounds and the pictures of the animals were presented, to test infants' looking preferences.

    The analysis of the eye movements of 18- and 21-month-olds showed that only the older infants preferred to look where the high-rank agent had looked. These results confirm that infants use information about social status to guide their learning.

    Jesús Bas, Alba Ayneto, & Núria Sebastián-Gallés
  16. The influence of syllabic structure on rule learning

    Friday, 30th of September

    The influence of syllabic structure on rule learning

     

    Irene Torres1 and Juan M. Toro1,2

     

    1. Center for Brain and Cognition, Universitat Pompeu Fabra

    2. ICREA

     

    The syllable is a basic processing unit in speech, used to segment the signal and access the lexicon. Rule learning is a basic mechanism by which we extract regularities from a speech stream over adjacent or non-adjacent segments such as syllables or phonemes. Here we explored whether our representations of syllabic structure modulate how we extract abstract structures from speech. In a series of experiments, participants (N=17 in each experiment) listened to a stream of trisyllabic nonsense words that followed an ABB repetition rule implemented over syllables (Experiments 1a-4a) or over vowels (Experiments 1b-4b). They were then presented with a two-alternative forced-choice (2AFC) generalization test in which the syllabic structure was modified (e.g. going from CV during familiarization to either CVC or CCV during test). Results show that participants generalized the abstract rule in all the experiments. Performance was higher in the experiments where the rule was implemented over syllables than in those where it was implemented over vowels. However, we did not observe any effect of changes in syllabic structure from familiarization to test. This suggests that the syllable did not modulate the extraction of abstract patterns over syllable or vowel segments.

    Irene Torres & Juan M. Toro
  17. Fronto-parietal connectivity in the extraction of language rules

    Friday, 30th of September

    Fronto-parietal connectivity in the extraction of language rules

    Joan Orpella1,2 and Ruth de Diego-Balaguer1,2,3

     

    1. Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Spain.

    2. Cognition and Brain Plasticity Unit, IDIBELL (Institut d’Investigació Biomèdica de Bellvitge), L'Hospitalet de Llobregat, Spain.

    3. ICREA (Catalan Institution for Research and Advanced Studies), Barcelona, Spain

     

    Recent work has placed rule learning at centre stage in research on language acquisition, yet views tend to remain encapsulated within the language domain. An integrated account might nevertheless require considering the involvement of other cognitive functions. In particular, because of the temporal nature of speech processing, the dynamic orienting of attention in time is likely to be crucial in the acquisition of morphosyntactic rules where sequential order is important. Specifically, attentional processes may aid the selection of relevant information from the speech stream. Given the functional and anatomical overlap between the fronto-parietal networks for language and for attention in time, it was hypothesized that the anterior segment of the left arcuate fasciculus (AF), connecting Broca's and Geschwind's territories, may be critical in facilitating implicit rule acquisition. Twenty-three right-handed native Spanish speakers were MRI-scanned in order to delineate the anterior fronto-parietal, posterior parieto-temporal, and long fronto-temporal segments of the AF and to extract surrogate measures of their axonal properties. Outside the scanner, participants were exposed to an artificial language with sentences containing AxC-type rules (non-adjacent dependencies in which the initial element A predicts the final element C) while performing a cover word-monitoring task. Reaction times (RTs) to word monitoring provided an indirect measure of online incidental rule-learning performance. A subsequent recognition test was then used to gauge participants' recognition of the dependencies.

    Joan Orpella & Ruth de Diego-Balaguer
  18. Vocabulary acquisition over a 1-week training program: an electrophysiological study

    Friday, 30th of September

    Vocabulary acquisition over a 1-week training program: an electrophysiological study

     

    Neus Ramos-Escobar1,2, Clément François1,2,3, Matti Laine4, and Antoni Rodriguez-Fornells1,2,5

    1. Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Spain.

    2. Cognition and Brain Plasticity Unit, IDIBELL (Institut d’Investigació Biomèdica de Bellvitge), L'Hospitalet de Llobregat, Spain.

    3. Institut de Recerca Pediàtrica Hospital Sant Joan de Déu, Barcelona, Spain.

    4. Department of Psychology, Åbo Akademi University, Turku, Finland.

    5. Catalan Institution for Research and Advanced Studies, ICREA, Barcelona, Spain.

     

    The need to acquire new vocabulary arises frequently in our lives, not only when learning a new language but also when starting a new activity. The centro-parietal N400 component of the event-related brain potential has classically been associated with semantic-conceptual processes. Nonetheless, recent ERP studies have provided evidence for a fronto-central N400 in novel word learning tasks. Here, we used the Ancient Farming Equipment paradigm to examine the brain responses of 25 adult participants acquiring a new vocabulary (pairs of novel object pictures and non-words) over five consecutive days. Three memory tasks (overt naming, covert naming, and recognition) were administered during each training session, and a four-month follow-up tested the maintenance of the word-to-picture associations. EEG was recorded during the first and last training sessions. Interestingly, both behavioural and ERP data showed evidence of learning, with correctly learned associations eliciting a larger P2, FN400, and late positive component during the last learning session than during the first. These results provide further evidence for the involvement of the FN400 component in the early stages of word learning.

    Neus Ramos-Escobar, Clément François, Matti Laine, & Antoni Rodriguez-Fornells