Thursday, November 14, 2019
Palais des Congrès de Montréal
Room 519A
Montréal, Canada

Brief Schedule

8:00 Registration
8:20 Opening and Welcome
8:30 General Auditory Processing Talks
9:30 Break (15 minutes)
9:45 Music Perception and Production Talks
11:00 Poster Setup (15 minutes)
11:10 Poster Session
101-107 General Auditory Processing
108-114 Multisensory Processing
115-119 Auditory Pathology
120-123 Auditory Development
124-130 Musical Features
131-134 Music Performance
135-145 Speech and Language
146-151 Audition and Emotion
12:40 Lunch Break (50 minutes)
1:30 Keynote Address: Isabelle Peretz (University of Montréal)
2:00 Speech Perception Talks
3:00 Break (15 minutes)
3:15 Auditory Perception Across the Lifespan Talks
4:15-5:00 Business Meeting (APCAM, APCS, and AP&C)

Welcome to APCAM 2019

Welcome to the 18th annual Auditory Perception, Cognition, and Action Meeting (APCAM 2019) at the Palais des Congrès de Montréal (Montréal Convention Centre) in Montréal! Since its founding in 2001-2002, APCAM's mission has been "...to bring together researchers from various theoretical perspectives to present focused research on auditory cognition, perception, and aurally guided action." We believe APCAM is a unique meeting in bringing together a mixture of basic and applied research from different theoretical perspectives and processing accounts, using numerous types of auditory stimuli (including speech, music, and environmental sounds). As noted in previous programs, the fact that APCAM continues to flourish is a testament to the openness of its attendees to considering multiple perspectives, which is a principal characteristic of scientific progress.

As announced at APCAM 2017 in Vancouver, British Columbia, APCAM is now affiliated with the journal Auditory Perception and Cognition (AP&C), which features both traditional and open-access publication options. Presentations at APCAM will be automatically considered for a special issue of AP&C, and in addition, we encourage you to submit your other work on auditory science to AP&C. Feel free to stop at the Taylor & Francis exhibitor booth at the Psychonomic Society meeting for additional information or to see a sample copy.

Also announced at the Vancouver meeting was APCAM's affiliation with the Auditory Perception and Cognition Society (APCS) (https://apcsociety.org). This non-profit foundation is charged with furthering research on all aspects of audition. The $30 registration fee for APCAM provides a one-year membership in APCS, which includes an individual subscription to AP&C and reduced open-access fees for AP&C.

As an affiliate meeting of the 60th Annual Meeting of the Psychonomic Society, APCAM is indebted to the Psychonomic Society for material support, including the meeting room, AV equipment, and poster displays. We gratefully acknowledge this support, and we ask that you pass along your appreciation to the Psychonomic Society for its continued backing of APCAM.

We appreciate all of our colleagues who contributed to this year's program. We thank you for choosing to share your work with us, and we hope you will continue to contribute to APCAM in the future. This year's program features spoken sessions on speech perception, music perception and production, auditory perception across the lifespan, and general auditory processing, as well as a wide variety of poster topics. We are confident that everyone attending APCAM will find something interesting, relevant, and thought-provoking.

If any issues arise during the meeting, or if you have thoughts on enhancing future meetings, please do not hesitate to contact any committee member. We wish you a pleasant and productive day at APCAM!

Sincerely,
The APCAM 2019 Organizing Committee

Timothy L. Hubbard (Chair)
Laura Getz (Co-Chair)
J. Devin McAuley
Kristopher (Jake) Patten
Peter Q. Pfordresher
Julia Strand

Back to top of page

Meeting Room Maps

The spoken sessions and business meeting will take place on the fifth floor of the convention center in Room 519A.
The poster session will take place down the hall in Room 517B. Posters for APCAM will begin with board #101 and continue through #151.


Back to top of page

Full Schedule

Presentation titles are clickable links to the associated abstract.

8:00 Registration
8:20 Opening and Welcome
General Auditory Processing
8:30 The Jingle and Jangle of Listening Effort Julia Strand (Carleton College)
8:45 The predictive brain: Signal informativeness modulates human auditory cortical responses Amour Simal (University of Montréal)
Patrick Bermudez (University of Montréal)
Christine Lefebvre (University of Montréal)
François Vachon (Laval University)
Pierre Jolicoeur (University of Montréal)
9:00 Incidental Auditory Learning and Memory-guided Attention: A Behavioural and Electroencephalogram (EEG) Study Manda Fischer (University of Toronto)
Morris Moscovitch (University of Toronto)
Claude Alain (University of Toronto)
9:15 Acoustic features of environmental sounds that convey actions Laurie Heller (Carnegie Mellon University)
Asadali Sheikh (Carnegie Mellon University)
9:30 Morning Break (15 minutes)
Music Perception and Production
9:45 Sung improvisation as a new tool for revealing implicit tonal knowledge in congenital amusia Michael W. Weiss (University of Montréal)
Isabelle Peretz (University of Montréal)
10:00 Physiological markers of individual differences in musicians' performance rates Shannon Eilyce Wright (McGill University)
Caroline Palmer (McGill University)
10:15 A Meta-analysis of Timbre Spaces Comparing Attack Time and the Temporal Centroid of the Attack Savvas Kazazis (McGill University)
Philippe Depalle (McGill University)
Stephen McAdams (McGill University)
10:30 On Computationally Modelling the Perception of Orchestral Effects using Symbolic Score Information Aurelien Antoine (McGill University)
Philippe Depalle (McGill University)
Stephen McAdams (McGill University)
10:45 Music & Consciousness: Shifting Representations in Memory for Melodies W. Jay Dowling (University of Texas at Dallas)
11:00 Poster Setup
11:10 Poster Session
General Auditory Processing (101-107)
101 Using survival analysis to examine the impact of auditory distraction on the time-course of insight versus analytic solutions in problem solving John E. Marsh (University of Central Lancashire)
Emma Threadgold (University of Central Lancashire)
Melissa E. Barker (University of Central Lancashire)
Damien Litchfield (Edge Hill University)
Linden J. Ball (University of Central Lancashire)
102 Confidence, learning, and metacognitive accuracy track auditory ERP dynamics Alexandria Zakrzewski (Kansas State University)
Natalie Ball (University at Buffalo)
Destiny Bell (Kansas State University)
Kelsey Wheeler (Kansas State University)
Matthew Wisniewski (Kansas State University)
103 Impacts of pre-stimulus brain states on auditory performance Matthew Wisniewski (Kansas State University)
104 Where are my ears? Impact of the point of observation on the perceived direction of a moving sound source Mike Russell (Washburn University)
Bryce Strickland (Washburn University)
Gabrielle Kentch (Washburn University)
105 A tree falls in a forest. Does it fall hard or soft? Perception of contact type and event title on observer judgments Carli Herl (Washburn University)
Mike Russell (Washburn University)
106 Does auditory looming bias exist? Influence of participant position and response method on accuracy of arrival judgments Gabrielle Kentch (Washburn University)
Mike Russell (Washburn University)
107 Influence of Recording Device Position on Judgments of Pedestrian Direction Across Two Dimensions Bryce Strickland (Washburn University)
Mike Russell (Washburn University)
Multisensory Processing (108-114)
108 The Pupillary Dilation Response to Auditory Deviations: A Matter of Acoustical Novelty? Alessandro Pozzi (Laval University)
Alexandre Marois (Laval University)
Johnathan Crépeau (Laval University)
Lysandre Provost (Laval University)
François Vachon (Laval University)
109 Is Auditory Working Memory More Demanding than Visual? Joseph Rovetti (Ryerson University)
Huiwen Goy (Ryerson University)
Frank Russo (Ryerson University)
110 Influence of ambiguous pitch-class information on sound-color synesthesia Lisa Tobayama (The University of Tokyo)
Sayaka Harashima (The University of Tokyo)
Kazuhiko Yokosawa (The University of Tokyo)
111 Plasticity in color associations evoked by musical notes in a musician with synesthesia and absolute pitch Cathy Lebeau (Université du Québec à Montréal)
François Richer (Université du Québec à Montréal)
112 The influence of timbre of musical instruments and ensembles on an associative color-sound perception Elena Nikolaevna Anisimova (Waldorf School Backnang)
113 External validation of the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA) with the Montreal-Beat Alignment Test (M-BAT) for the assessment of synchronization to music Ming Ruo Zhang (University of Montréal)
Véronique Martel (University of Montréal)
Isabelle Peretz (University of Montréal)
Simone Dalla Bella (University of Montréal)
114 Tablet version of the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA) Hugo Laflamme (University of Montréal)
Mélody Blais (University of Montréal)
Naeem Komeilipoor (University of Montréal)
Camille Gaillard (University of Montréal)
Melissa Kadi (University of Montréal)
Agnès Zagala (University of Montréal)
Simon Rigoulot (University of Montréal)
Sonja Kotz (University of Maastricht & Max Planck Institute for Human Cognitive and Brain Sciences)
Simone Dalla Bella (University of Montréal)
Auditory Pathology (115-119)
115 Effects of Auditory Distraction on Cognition and Eye Movements in Adults with Mild Traumatic Brain Injury Ileana Ratiu (Midwestern University)
Miyka Whiting (Midwestern University)
116 Head Injuries and the Hearing Screening Inventory Chris Koch (George Fox University)
Abigail Anderson (George Fox University)
117 Can rhythmic abilities distinguish neurodevelopmental disorders? Camille Gaillard (University of Montréal)
Mélody Blais (University of Montréal)
Frédéric Puyjarinet (University of Montpellier)
Valentin Bégel (McGill University)
Régis Lopez (National Reference Center for Narcolepsy and Idiopathic Hypersomnia)
Madeline Vanderbergue (University of Lille)
Marie-Pierre Lemaître (University Hospital of Lille)
Quentin Devignes (University of Lille)
Delphine Dellacherie (University of Lille)
Simone Dalla Bella (University of Montréal)
118 A Case Study in Unilateral Pure Word Deafness Colin Noe (Rice University)
Simon Fischer-Baum (Rice University)
119 Auditory Attentional Modulation in Hearing Impaired Listeners Jason Geller (University of Iowa)
Inyong Choi (University of Iowa)
Auditory Development (120-123)
120 Bilingual and monolingual toddlers can detect mispronunciations, regardless of cognate status Esther Schott (Concordia University)
Krista Byers-Heinlein (Concordia University)
121 Does the structure of the lexicon change with age? An investigation using the Auditory Lexical Decision Task Kylie Alberts (Union College)
Brianne Noud (Washington University in St. Louis)
Jonathan Peelle (Washington University in St. Louis)
Chad Rogers (Union College)
122 The temporal dynamics of intervention for a child with delayed language Schea Fissel (Midwestern University, Glendale)
Ida Mohebpour (Midwestern University, Glendale)
Ashley Nye (Midwestern University, Glendale)
123 Investigating the melodic pitch memory of children with autism spectrum disorder Samantha Wong (McGill University)
Sandy Stanutz (McGill University)
Shalini Sivathasan (McGill University)
Emily Stubbert (McGill University)
Jacob Burack (McGill University)
Eve-Marie Quintin (McGill University)
Musical Features (124-130)
124 Bowed plates and blown strings: Odd combinations of excitation methods and resonance structures impact perception Erica Huynh (McGill University)
Joël Bensoam (IRCAM)
Stephen McAdams (McGill University)
125 Evaluation of Musical Instrument Timbre Preference as a Function of Materials Jacob Colville (James Madison University)
Michael Hall (James Madison University)
126 The Spatial Representation of Pitch in Relation to Musical Training Asha Beck (James Madison University)
Michael Hall (James Madison University)
Kaitlyn Bridgeforth (James Madison University)
127 The Pythagorean Comma and the Preference for Stretched Octaves Timothy L. Hubbard (Arizona State University and Grand Canyon University)
128 The Root of Harmonic Expectancy: A Priming Study Trenton Johanis (University of Toronto)
Mark Schmuckler (University of Toronto)
129 WITHDRAWN—Perception of Beats and Temporal Characteristics of the Auditory Image Sharathchandra Ramakrishnan (University of Texas at Dallas)
130 Knowledge of Western popular music by Canadian- and Chinese-born university students: the extent of common ground Annabel Cohen (University of Prince Edward Island)
Corey Collett (University of Prince Edward Island)
Jingyuan Sun (University of Prince Edward Island)
Music Performance (131-134)
131 Generalization of Novel Sensorimotor Associations among Pianists and Non-pianists Chihiro Honda (University at Buffalo, SUNY)
Karen Chow (University at Buffalo, SUNY)
Emma Greenspon (Monmouth University)
Peter Pfordresher (University at Buffalo, SUNY)
132 WITHDRAWN—Singing training improves pitch accuracy and vocal quality in novice singers Dawn Merrett (The University of Melbourne)
Sarah Wilson (The University of Melbourne)
133 Musical prodigies practice more during a critical developmental period Chanel Marion-St-Onge (University of Montréal)
Megha Sharda (University of Montréal)
Michael W. Weiss (University of Montréal)
Margot Charignon (University of Montréal)
Isabelle Peretz (University of Montréal)
134 A case study of prodigious musical memory Margot Charignon (University of Montréal)
Chanel Marion-St-Onge (University of Montréal)
Michael Weiss (University of Montréal)
Isabelle Héroux (Université du Québec à Montréal)
Megha Sharda (University of Montréal)
Isabelle Peretz (University of Montréal)
Speech and Language (135-145)
135 What Accounts for Individual Differences in Susceptibility to the McGurk Effect? Lucia Ray (Carleton College)
Naseem Dillman-Hasso (Carleton College)
Violet Brown (Washington University in St. Louis)
Maryam Hedayati (Carleton College)
Annie Zanger (Carleton College)
Sasha Mayn (Carleton College)
Julia Strand (Carleton College)
136 Fricative/affricate perception measured using auditory brainstem responses Michael Ryan Henderson (Villanova University)
Joseph Toscano (Villanova University)
137 Vowel-pitch interactions in the perception of pitch interval size Frank Russo (Ryerson University)
Dominique Vuvan (Skidmore College)
138 Interpretation of pitch patterns in spoken and sung speech Shannon Heald (University of Chicago)
Stephen Van Hedger (University of Western Ontario)
Howard Nusbaum (University of Chicago)
139 How binding words with a musical frame improves their recognition Agnès Zagala (University of Montréal)
Séverine Samson (University of Lille)
140 Exploring Linguistic Rhythm in Light of Poetry: Durational Priming of Metrical Speech in Turkish Züheyra Tokaç (Bogazici University)
Esra Mungan (Bogazici University)
141 The effects of item and talker variability on the perceptual learning of time-compressed speech following short-term exposure Karen Banai (University of Haifa)
Hanin Karawani (University of Haifa)
Yizhar Lavner (Tel-Hai College)
Limor Lavie (University of Haifa)
142 The effect of interlocutor voice context on bilingual lexical decision Monika Molnar (University of Toronto)
Kai Leung (University of Toronto)
143 Manual directional gestures facilitate speech learning Anna Zhen (New York University Shanghai)
Stephen Van Hedger (University of Chicago)
Shannon Heald (University of Chicago)
Susan Goldin-Meadow (University of Chicago)
Xing Tian (New York University Shanghai)
144 Social Judgments about Speaker Confidence and Trust: A Computer Mouse-Tracking Paradigm Jennifer Roche (Kent State University)
Shae D. Morgan (University of Louisville)
Erica Glynn (Kent State University)
145 Women are Better than Men in Detecting Vocal Sex Ratios John Neuhoff (The College of Wooster)
Audition and Emotion (146-151)
146 Impact of Affective Vocal Tone on Social Judgments of Friendliness and Political Ideology Erica Glynn (Kent State University)
Thimberley Morgan (Kent State University)
Rachel Whitten (Kent State University)
Madison Knodell (Kent State University)
Jennifer Roche (Kent State University)
147 Emotive Attributes of Complaining Speech Maël Mauchand (McGill University)
Marc Pell (McGill University)
148 Effects of vocally-expressed emotions on visual scanning of faces: Evidence from Mandarin Chinese Shuyi Zhang (McGill University)
Pan Liu (McGill University; Western University)
Simon Rigoulot (McGill University; Université du Québec à Trois-Rivières)
Xiaoming Jiang (McGill University; Tongji University)
Marc Pell (McGill University)
149 WITHDRAWN—Perception of emotion in West African indigenous music Cecilia Durojaye (Max Planck Institute for Empirical Aesthetics)
150 Timbral, temporal, and expressive shaping of musical material and their respective roles in musical communication of affect Kit Soden (McGill University)
Jordana Saks (McGill University)
Stephen McAdams (McGill University)
151 Emotional responses to acoustic features: Comparison of three sounds produced by Ram’s horn (Shofar) Leah Fostick (Ariel University)
Maayan Cytrin (Ariel University)
Eshkar Yadgar (Ariel University)
Howard Moskowitz (Mind Genomics Associates, Inc.)
Harvey Babkoff (Bar-Ilan University)
12:40 Lunch Break (50 minutes)
Keynote Address
1:30 The extraordinary variations of the musical brain Isabelle Peretz (University of Montréal)
Speech Perception and Production
2:00 Working Memory Is Associated with the Use of Lip Movements and Sentence Context During Speech Perception in Noise in Younger and Older Bilinguals Alexandre Chauvin (Concordia University)
Natalie Phillips (Concordia University)
2:15 Inhibition of articulation muscles during listening and reading: A matter of modality of input and intention to speak out loud Naama Zur (University of Haifa)
Zohar Eviatar (University of Haifa)
Avi Karni (University of Haifa)
2:30 Cup! Cup? Cup: Comprehension of Intentional Prosody in Adults and Children Melissa Jungers (The Ohio State University)
Julie Hupp (The Ohio State University)
Celeste Hinerman (The Ohio State University)
2:45 Interrelations in Emotion and Speech Physiology Lead to a Sound Symbolic Case Shin-Phing Yu (Arizona State University)
Michael McBeath (Arizona State University)
Arthur Glenberg (Arizona State University)
3:00 Afternoon Break (15 minutes)
Multisensory and Developmental
3:15 Affective interactions: Comparing auditory and visual components in dramatic scenes Kit Soden (McGill University)
Moe Touizrar (McGill University)
Sarah Gates (Northwestern University)
Bennett K. Smith (McGill University)
Stephen McAdams (McGill University)
3:30 Auditory and Somatosensory Interaction in Speech Perception in Children and Adults Paméla Trudeau-Fisette (Université du Québec à Montréal)
Camille Vidou (Université du Québec à Montréal)
Takayuki Ito (Université du Québec à Montréal)
Lucie Ménard (Université du Québec à Montréal)
3:45 Rhythmic determinants of developmental dyslexia Valentin Begel (Lille University)
Simone Dalla Bella (University of Montréal)
Quentin Devignes (Lille University)
Madeline Vanderbergue (Lille University)
Marie-Pierre Lemaitre (University Hospital of Lille)
Delphine Dellacherie (Lille University)
4:00 The effectiveness of musical interventions on cognition in children with autism spectrum disorder: A systematic review and meta-analysis Kevin Jamey (University of Montréal)
Nathalie Roth (University of Montréal)
Nicholas E. V. Foster (University of Montréal)
Krista L. Hyde (University of Montréal)
4:15-5:00 Business Meeting (APCAM, APCS, and AP&C)

Talk Abstracts

General Auditory Processing

8:30am

The Jingle and Jangle of Listening Effort
Julia Strand (Carleton College)
Author email: jstrand@carleton.edu

There is increasing concern that findings in the psychological literature may not be as robust or replicable as previously thought. The "replication crisis" that has struck psychology and other sciences is likely to be the result of a host of factors, including questionable research practices, the incentive structure of science, and the file-drawer problem (see Nelson et al., 2018, for a recent review). Another factor that may contribute to inconsistencies in the literature is variability in measurement (Flake & Fried, 2019). Indeed, researchers may choose to operationalize constructs that cannot be directly measured (e.g., "musical skill," "happiness," "working memory") in a host of ways. This inconsistency in measurement has the potential to lead to what has been referred to as the jingle fallacy—falsely assuming two tasks or scales measure the same underlying construct because they have the same name—or the jangle fallacy—falsely assuming two tasks or scales measure something different because they have different names (see Block, 2000; Flake & Fried, 2019; Thorndike, 1904). Here, we present recent research from our lab on the construct of Listening Effort—"the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a [listening] task" (Pichora-Fuller et al., 2016). This work demonstrates that multiple listening effort tasks that are commonly used in the literature and assumed to measure the same underlying construct are in fact only very weakly related to one another (demonstrating "jingle"). In addition, we show that measures that are intended to quantify listening effort may instead be tapping into more domain-general cognitive abilities like working memory (demonstrating "jangle"). We also offer concrete suggestions for other researchers to evaluate the measurement of the constructs they study to help them avoid jingle and jangle in their own work.

Back to schedule

8:45am

The predictive brain: Signal informativeness modulates human auditory cortical responses
Amour Simal (University of Montréal)
Patrick Bermudez (University of Montréal)
Christine Lefebvre (University of Montréal)
François Vachon (Laval University)
Pierre Jolicoeur (University of Montréal)
Author email: amour.simal@umontreal.ca

Rapid, pre-attentive prediction, coding of prediction error, and disambiguation of stimuli seem to be of particular importance in the auditory domain. We aimed to observe how information is carried in a tone sequence, and how it modulates cortical responses. To do this, we used a standard short-term memory (STM) task. Participants heard two sequences of 1, 3, or 5 tones (200 ms on, 200 ms off) separated by a silent interval (2 s). They decided whether the two sequences were the same or different. In a first experiment, the length of the tone sequences was randomized between trials. During the first sequence, the amplitude of the auditory P2 was larger for the second tone in trials with 3 tones, and for the second and fourth tones in trials with 5 tones. We hypothesize that the increase in P2 reflected a dynamic disambiguation process because these tones were predictive of a sequence longer than 1 or 3 tones. This hypothesis was supported by the absence of P2 amplitude modulation during the second sequence (when sequence length was already known). In a second experiment, we blocked trials by sequence length to ensure the effects were not caused by some process related to encoding in STM. There was no P2 amplitude modulation in either the first or the second sequence. Thus, tones 2 and 4 had a larger amplitude only when they provided new information about the length of the current tone sequence. To some extent, the auditory N1 also showed these modulations. These results suggest a rapid, dynamic adaptation of auditory cortical responses related to contextually determined disambiguating information on a very short timescale.

Back to schedule

9:00am

Incidental Auditory Learning and Memory-guided Attention: A Behavioural and Electroencephalogram (EEG) Study
Manda Fischer (University of Toronto)
Morris Moscovitch (University of Toronto)
Claude Alain (University of Toronto)
Author email: manda.fischer@mail.utoronto.ca

Can implicit (non-conscious) associations facilitate auditory target detection? Participants were presented with 80 different audio-clips of familiar sounds, half of which included a lateralized pure tone. Participants were only told to classify the audio-clips as natural (e.g., waterfall) or manmade (e.g., airplane engine). After a delay, participants took a surprise memory test in which they were presented with old and new audio-clips and asked to press a button to detect a faint lateralized pure tone (target) embedded in each audio-clip. On each trial, they also indicated (i) whether the clip was old or new; (ii) whether it was recollected or familiar; and (iii) whether the tone was on the left, on the right, or not present when they heard the audio-clip prior to the test. The results show good explicit memory for the clips, but not for the tone location or tone presence. Target detection was also faster for old clips than for new clips but did not vary as a function of the association between spatial location and audio-clip. Neuro-electric activity at test, however, revealed an old-new effect at midline and frontal sites as well as a significant difference between clips that had been associated with the location of the target and those that were not. These results suggest that implicit associations were formed that facilitate target processing irrespective of location. The implications of these findings in the context of memory-guided attention are discussed.

Back to schedule

9:15am

Acoustic features of environmental sounds that convey actions
Laurie Heller (Carnegie Mellon University) 
Asadali Sheikh (Carnegie Mellon University)
Author email: laurieheller@cmu.edu

How do humans use acoustic information to understand what events are occurring in their environment? Although there are countless environmental sounds, there exists a manageable number of types of physical interactions that produce sounds, such as impacts, scrapes, and splashes (e.g., Gaver, 1993). To address how listeners may detect and utilize the classes of causal interactions that produce sounds, listeners classified a wide variety of sound events according to their actions in two experiments. In the first experiment, listeners judged a variety of small-scale, human-generated events made with solids, liquids, and/or air (Heller and Skerritt, APCAM, Boston, MA, 2009; the Sound Events Database, http://www.auditorylab.org). In the second experiment, listeners judged a wider variety of human-, animal-, and machine-generated events (ESC-50 Environmental Sound Database; Piczak, Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 1015-1018, 2015, DOI 10.1145/2733373.2806390). In both experiments, listeners were asked to assess the causal likelihood of the sounds being created by various action-related verbs. Cluster analyses reveal how the sounds were organized with respect to the actions they convey to listeners. Acoustic analysis of the sounds incorporated spectral features (e.g., harmonic partials), temporal features (e.g., modulation rate), and spectro-temporal features (e.g., frequency variation of spectral peaks across repeated transients). First, an effort was made to connect these acoustic features to physical causal processes that differentiated classes of sounds. Next, discriminant analyses showed which actions were most important in classifying and discriminating the sounds. Finally, a small set of acoustic variables was able to differentiate the behaviorally derived sound groups, suggesting the possibility of a small set of higher-order auditory features that typify each causal sound-producing event.
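
To make this two-step analysis concrete, the sketch below clusters sounds by their action-verb ratings and then asks whether a small set of acoustic variables separates the resulting groups. It is illustrative only: the matrices, the cluster count, and the pairing of average-linkage clustering with linear discriminant analysis are placeholder assumptions, not the authors' data or exact pipeline.

```python
# Illustrative sketch only (synthetic data): cluster sounds by the actions they
# convey, then test whether acoustic features separate the behaviorally derived groups.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
verb_ratings = rng.random((60, 20))       # 60 sounds rated on 20 action-related verbs
acoustic_features = rng.random((60, 8))   # 8 spectral/temporal features per sound

# 1) Hierarchical clustering of sounds on their action-verb rating profiles.
groups = fcluster(linkage(verb_ratings, method="average"), t=4, criterion="maxclust")

# 2) Discriminant analysis: can the acoustic variables recover those groups?
#    (In practice a cross-validated accuracy would be reported.)
lda = LinearDiscriminantAnalysis().fit(acoustic_features, groups)
print("LDA training accuracy:", lda.score(acoustic_features, groups))
```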

Back to schedule

Music Perception and Production

9:45am

Sung improvisation as a new tool for revealing implicit tonal knowledge in congenital amusia
Michael W. Weiss (University of Montréal)
Isabelle Peretz (University of Montréal)
Author email: michael.william.weiss@gmail.com

Individuals with congenital amusia have notorious difficulty detecting out-of-key notes. A recent model of amusia proposes that this deficit is caused by disruption of conscious access to tonal knowledge, rather than absence of tonal knowledge (Peretz, 2016, TiCS). Interestingly, some amusics sing well-known songs in tune. In one case, an amusic improvised melodies that were mostly in-key, although this was not the intended task (Peretz, 1993, Cog. Neuropsych.). The case described by Peretz (1993) is noteworthy because, by definition, an improvisation does not rely on a melodic template, and any observed tonality cannot be attributed to rehearsal or feedback from others. In the current study, we examined improvisations by asking a group of amusics to sing novel melodies in response to verbal prompts or in continuation of melodic stems (n=28 improvisations; M=44.1±33.8 notes/improvisation). Data collection is ongoing but currently includes 14 amusics and 8 nonmusician controls. In order to assess the extent to which each improvisation reflects an internal representation of key membership, we calculated the proportion of in-key notes (PIK) in the nearest key (Krumhansl-Schmuckler), which was z-transformed using a randomly-generated null distribution (zPIK). For each participant, we calculated the proportion of improvisations that respected a given key better than chance (i.e., zPIK>0). This proportion was above chance (i.e., a proportion of 0.5) for 12 of 14 amusics (M=0.62±0.15) and 7 of 8 controls (M=0.80±0.21). At the group level, the average zPIK score was greater than chance (i.e., a zPIK of 0) for both amusics (M=0.61±0.95) and controls (M=1.90±1.50), and controls had a higher average zPIK score than amusics. The results confirm that amusics have acquired tonal knowledge without awareness, but that this knowledge may be partial compared to typical nonmusicians. This fulfills a key prediction of Peretz (2016) and demonstrates the utility of sung improvisation as a method for studying implicit musical knowledge.
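
As an illustration of the zPIK measure described above, the sketch below estimates the best-fitting key with the Krumhansl-Schmuckler profile-correlation method, computes the proportion of in-key notes, and z-scores it against random melodies of the same length. The profile values (following Krumhansl & Kessler, 1982), the use of the natural minor scale, and the example melody are assumptions made for the sketch, not the authors' implementation.

```python
# Illustrative sketch only: proportion of in-key notes (PIK) in the nearest key,
# z-transformed against a randomly generated null distribution (zPIK).
import numpy as np

MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}
MINOR_SCALE = {0, 2, 3, 5, 7, 8, 10}   # natural minor, a simplifying assumption

def best_key(pitch_classes):
    """Return (tonic, mode) maximizing the Krumhansl-Schmuckler profile correlation."""
    hist = np.bincount(pitch_classes % 12, minlength=12).astype(float)
    best = None
    for tonic in range(12):
        for mode, profile in (("major", MAJOR), ("minor", MINOR)):
            r = np.corrcoef(hist, np.roll(profile, tonic))[0, 1]
            if best is None or r > best[0]:
                best = (r, tonic, mode)
    return best[1], best[2]

def pik(pitch_classes):
    """Proportion of notes belonging to the scale of the best-fitting key."""
    tonic, mode = best_key(pitch_classes)
    scale = MAJOR_SCALE if mode == "major" else MINOR_SCALE
    return np.mean([(pc - tonic) % 12 in scale for pc in pitch_classes])

def zpik(pitch_classes, n_null=1000, seed=0):
    """z-score the observed PIK against random melodies of the same length."""
    rng = np.random.default_rng(seed)
    observed = pik(pitch_classes)
    null = np.array([pik(rng.integers(0, 12, size=len(pitch_classes))) for _ in range(n_null)])
    return (observed - null.mean()) / null.std()

# Example: a 12-note "improvisation" (MIDI note numbers) that stays in C major.
print(zpik(np.array([60, 62, 64, 65, 67, 69, 71, 72, 67, 64, 62, 60])))
```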

Back to schedule

10:00am

Physiological markers of individual differences in musicians' performance rates
Shannon Eilyce Wright (McGill University)
Caroline Palmer (McGill University)
Author email: shannon.wright2@mail.mcgill.ca

Spontaneous motor production rates are observed when individuals perform music. Individual differences in these spontaneous rates exist, yet factors that contribute to these individual differences remain largely unidentified. Biological oscillations such as circadian and cardiac rhythms can be influenced by music performance tasks. This study investigates whether physiological markers can explain individual differences in musicians' spontaneous production rates. Twenty-four trained pianists performed melodies at four testing sessions in a single day (09h, 13h, 17h, 21h). At each testing session, 5-minute baseline measurements of heart rate (RR or beat-to-beat intervals) were recorded; heart rate was also measured during piano performance. Pianists performed a familiar melody and an unfamiliar melody at regular, spontaneous (unpaced) rates at each testing session. Chronotype measures (morning or evening type) were collected at the final session. Spontaneous performance rates were slowest at 09h in the sample of primarily evening chronotypes. Temporal variability did not change significantly across the testing times. Performance rates within individuals were largely stable across testing sessions, with faster performers remaining faster across the day and slower performers remaining slower across the day. Heart rate was faster and heart rate variability was reduced during music performances relative to baseline measurements. Autorecurrence quantification analysis on the RR heartbeat intervals indicated more deterministic patterns in cardiac fluctuations during music performance than during baseline. Moreover, individual pianists' timing variability in performances of the unfamiliar piece correlated with greater determinism in cardiac rhythms. These results suggest that performance rates are stable across the day and that dynamic patterns in cardiac rhythms may differentiate individuals' performances based on their temporal features.

Back to schedule

10:15am

A Meta-analysis of Timbre Spaces Comparing Attack Time and the Temporal Centroid of the Attack
Savvas Kazazis (McGill University)
Philippe Depalle (McGill University)
Stephen McAdams (McGill University)
Author email: savvas.kazazis@mail.mcgill.ca

Attack time (or rise time) is considered to be one of the most important temporal audio features for explaining dissimilarity ratings between pairs of sounds. In several timbre studies, it has been shown to correlate strongly with one perceptual dimension revealed by multidimensional scaling analysis (MDS) of the dissimilarity ratings. An ordinal scaling experiment was conducted to test whether listeners are able to rank order harmonic stimuli with varying attack times and temporal centroids of the attack (i.e., the center of gravity of the amplitude envelope of a signal during the attack portion). The attack times ranged from 40 to 500 ms, while the decay times were kept constant. Each stimulus set of attack temporal centroids was constructed with fixed attack (range = 40-500 ms) and decay times but varied in the shape of the amplitude envelope during the attack portion. Although there were many confusions in ordering stimuli with short attack times, in most cases listeners could correctly order stimuli with varying temporal centroids even at very short attack times. These results led us to conduct a meta-analysis of six timbre spaces (Grey, 1977; Grey & Gordon, 1978; Iverson & Krumhansl, 1993; McAdams et al., 1995; Lakatos, 2000) in which we compared the explanatory power of attack time with that of the temporal centroid of the attack. The analysis showed that the attack temporal centroid correlated with a given MDS dimension as strongly as or more strongly than attack time itself, and that in some cases the differences between the two correlation coefficients were significant (p < 0.05). In light of the results of the ordinal scaling experiment, the meta-analysis of timbre spaces indicates that the attack temporal centroid is a robust audio feature for explaining dissimilarity ratings along a temporal perceptual dimension.

Back to schedule

10:30am

On Computationally Modelling the Perception of Orchestral Effects using Symbolic Score Information
Aurelien Antoine (McGill University)
Philippe Depalle (McGill University)
Stephen McAdams (McGill University)
Author email: aurelien.antoine@mcgill.ca

Orchestration is a musical practice that involves combining the acoustic properties of several instruments. This creates perceptual effects that are prominent in orchestral music, such as blend, segregation, and orchestral contrasts, to name but three. Here, we present our approach to modelling the perception of orchestral effects that result from three auditory grouping processes, namely concurrent, sequential, and segmental grouping. Moreover, we discuss how findings from research on auditory scene analysis and music perception experiments informed the formulation of rules for computer models capable of detecting and identifying such perceptual effects from symbolic score information alone. Using the modelling of orchestral blend as a case study, we detail the benefits and performance of applying such rules to the processing of musical information. The models obtained an average accuracy of 81%, established by comparing their output on several orchestral excerpts to annotations provided by musical experts. However, we also explain the limitations of using rules based solely on symbolic information. Combining instrumental characteristics inherently involves manipulating timbral properties that are not represented in the symbolic information of scores, highlighting the need to perform and incorporate audio analyses in order to fully grasp the parameters responsible for the perception of orchestral effects.
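
To give a concrete (and deliberately simplified) flavor of such a rule, the hypothetical sketch below flags pairs of instruments as candidate blends when their note onsets in a symbolic score are highly synchronized, following the auditory-scene-analysis cue of onset synchrony for concurrent grouping. The event format, tolerance, and threshold are illustrative assumptions and do not reproduce the authors' models.

```python
# Hypothetical sketch of one symbolic-score rule: onset synchrony as a cue for blend.
from itertools import combinations

# (instrument, onset time in seconds) pairs extracted from a symbolic score.
events = [
    ("flute", 0.00), ("flute", 0.50), ("flute", 1.00),
    ("oboe", 0.01), ("oboe", 0.52), ("oboe", 1.00),
    ("viola", 0.25), ("viola", 0.75), ("viola", 1.30),
]

def onset_synchrony(a, b, events, tol=0.05):
    """Fraction of instrument a's onsets matched by an onset of b within tol seconds."""
    onsets_a = [t for inst, t in events if inst == a]
    onsets_b = [t for inst, t in events if inst == b]
    matched = sum(any(abs(t - u) <= tol for u in onsets_b) for t in onsets_a)
    return matched / len(onsets_a) if onsets_a else 0.0

instruments = sorted({inst for inst, _ in events})
for a, b in combinations(instruments, 2):
    sync = min(onset_synchrony(a, b, events), onset_synchrony(b, a, events))
    if sync >= 0.8:   # threshold chosen arbitrarily, for illustration only
        print(f"candidate blend: {a} + {b} (onset synchrony = {sync:.2f})")
```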

Back to schedule

10:45am

Music & Consciousness: Shifting Representations in Memory for Melodies
W. Jay Dowling (University of Texas at Dallas)
Author email: jdowling@utdallas.edu

We have a very vivid experience of music heard for the first time. However, a growing body of convergent research shows that our memory for it varies within the first minute after our experience, and that the memories of nonmusicians, moderately trained musicians, and professional musicians differ. All three groups hear familiar melodies in terms of scale-step values. However, only listeners with musical training automatically hear the pitches of unfamiliar melodies as scale steps, and that encoding takes on the order of 10 s after first hearing. When tested 4-5 s after hearing a melody followed by continuing music, both nonmusicians and moderate musicians confuse targets with lures in which the melody has been shifted to a different place on the scale (with changes in pitch-to-pitch intervals). If that test is delayed another 10 s, both groups reject the same-contour lures better, and that performance is especially good for the moderate musicians. This result converges with the failure of both groups to notice out-of-key wrong notes in unfamiliar melodies; even if they were encoding scale-step values, that process is too slow to provide a timely response to an out-of-key note. Furthermore, noticing a tonal modulation presupposes tracking the relative strength of scale-step values from moment to moment. When listeners track modulations in both Western and South Indian melodies using the continuous-probe-tone paradigm, nonmusicians' tonal profiles generally fail to show the effects of the modulations. Moderate musicians show them, and professionals show them even more strongly. But even for professionals, the shift in the tonal profiles continues to grow even after the first 15 s of the modulation. Taken together, this evidence suggests that even professionals will remember hearing a piece with firmly grounded tonal structure, even though their initial immediate experience of the piece lacked that structure.

Back to schedule

Keynote Address

1:30pm

The extraordinary variations of the musical brain
Isabelle Peretz (International Laboratory for Brain, Music, and Sound Research (BRAMS), University of Montréal)
Author email: isabelle.peretz@umontreal.ca
Visit Dr. Peretz's website here

The past decade of research has provided compelling evidence that musicality is a fundamental human trait, and its biological basis is increasingly scrutinized. In this endeavor, the detailed study of individuals who have musical deficiencies (congenital amusia) or exceptional talents (musical prodigies) is instructive because of likely neurogenetic underpinnings. I will review key points that have emerged during recent years regarding the neurobiological foundations of these extraordinary variations of musicality.

Back to schedule

Speech Perception and Production

2:00pm

Working Memory Is Associated with the Use of Lip Movements and Sentence Context During Speech Perception in Noise in Younger and Older Bilinguals
Alexandre Chauvin (Concordia University)
Natalie Phillips (Concordia University)
Author email: alexandre.bchauvin@gmail.com

Despite ubiquitous background noise, most people perceive speech successfully. This may be explained in part by a combination of supporting mechanisms. For example, visual speech cues (e.g., lip movements) and sentence context typically improve speech perception in noise. While the beneficial nature of these cues is well documented in native listeners, comparatively little is known for non-native listeners, who may have less developed linguistic knowledge in their second language and are typically more impaired by background noise. Furthermore, older bilinguals may be at a disadvantage, as they also have to manage sensory changes such as presbycusis and/or changes in visual acuity. We are investigating the extent to which French-English/English-French bilinguals in Montreal benefit from visual speech cues and sentence context in their first (L1) and second language (L2). Participants were divided into three groups: young adults (18-35 years), older adults (65+) with normal hearing, and older adults with hearing loss. All participants were presented with audio-video recorded sentences in noise (twelve-talker babble); they had to repeat the terminal word of each sentence. Half of the sentences offered moderate levels of contextual information (e.g., "In the mail, he received a letter."; MC), while the other half offered little context (e.g., "She thought about the letter."; LC). Furthermore, the sentences were presented in three modalities: visual, auditory, and audiovisual. Participants were more accurate in L1 compared to L2, and for MC sentences compared to LC sentences. However, all groups benefited from visual speech cues and sentence context to similar extents. This benefit, however, was differentially moderated by visual working memory performance in young adults compared to older adults with normal hearing, suggesting that working memory capacity plays a role in the extent to which bilinguals benefit from the combination of visual speech cues and sentence context, and that this relationship changes across the lifespan.

Back to schedule

2:15pm

Inhibition of articulation muscles during listening and reading: A matter of modality of input and intention to speak out loud
Naama Zur (University of Haifa)
Zohar Eviatar (University of Haifa)
Avi Karni (University of Haifa)
Author email: naamazur@gmail.com ; zohare@research.haifa.ac.il ; avi.karni@yahoo.com

In both hand movements and articulation, intended acts can affect the way current acts are executed. Thus, the articulation of a given speech sound is often contingent on the intention to produce subsequent sounds (co-articulation). Here we show that the intention to subsequently repeat a short sentence, overtly or covertly, significantly modulated the articulatory musculature already while listening to or reading the sentence (i.e., during the input phase). Young adults were instructed to read sentences (presented as whole sentences or word-by-word) or listen to recordings of sentences that were to be repeated afterward. Surface electromyography (sEMG) recordings showed significant reductions in articulatory muscle activity, over the orbicularis oris and sternohyoid muscles, compared to baseline during the input phase. These temporary reductions in EMG activity were contingent on the intention to subsequently repeat the input overtly or covertly, but also on the input modality. Thus, there were different patterns of activity modulation before the cue to respond, depending on the input modality and the intended response. Only when repetition was to be overt did a significant build-up of activity occur after sentence presentation, before the cue to respond; this build-up was most pronounced when the sentence to be repeated was heard. Neurolinguistic models suggest that language perception and articulation interact; the current results suggest that this interaction begins already during the input phase (listening or reading) and reflects the intended response.

Back to schedule

2:30pm

Cup! Cup? Cup: Comprehension of Intentional Prosody in Adults and Children
Melissa Jungers (The Ohio State University)
Julie Hupp (The Ohio State University)
Celeste Hinerman (The Ohio State University)
Author email: jungers.2@osu.edu

Prosody is the way something is spoken. Adults and children regularly use prosody in language comprehension and production, but much research focuses on emotion or on syntactic interpretation. The current study focuses on comprehension of intentional prosody devoid of semantic information. Adults (n = 72) and children (n = 72) were asked to identify the referent of an isolated label (familiar or nonsense words) based on the varying prosodic information alone. Labels were spoken with intentional prosody (warn, doubt, name). Adults and children selected the intended referent at above-chance levels of performance for both word types and all intentions, with adults performing faster and more accurately than children. Overall, this research demonstrates that children and adults can successfully determine the intention behind words and nonsense words, even when they are presented in isolation. This adds support to the idea that prosody alone can convey meaning.

Back to schedule

2:45pm

Interrelations in Emotion and Speech Physiology Lead to a Sound Symbolic Case
Shin-Phing Yu (Arizona State University)
Michael McBeath (Arizona State University)
Arthur Glenberg (Arizona State University)
Author email: shinphin@asu.edu

Two experiments explore how emotion physiology can affect language iconicity. We examined how patterns of facial muscle activity (FMA) may lead to associating affect with specific phonemic utterances. We measured the increase in perception of positive affect for the /i:/ vowel sound (as in "gleam") and negative affect for the /ʌ/ sound (as in "glum") in non-words (NWs). English single-syllable NWs containing /i:/ or /ʌ/ were rated for valence on a scale of -5, -3, -1, +1, +3, or +5. We found that /i:/ NWs were rated significantly above 0, and /ʌ/ NWs significantly below 0, confirming a bidirectional emotional association. We then tested whether verbal articulation of NWs containing these vowel sounds with confirmed emotional valence ratings corresponded with consistent affective FMA. We recorded FMA using EMG while participants read NWs. Regression analysis revealed a relation between the valence rating of the NWs and FMA. Our confirmation of a relationship between emotion and speech physiology can explain a type of sound symbolism. FMA associated with particular emotions appears to favor production of specific phonemic sounds. The findings support the idea that human physiology associated with emotions likely affects word meanings in a non-arbitrary manner.

Back to schedule

Multisensory and Developmental

3:15pm

Affective interactions: Comparing auditory and visual components in dramatic scenes
Kit Soden (McGill University)
Moe Touizrar (McGill University)
Sarah Gates (Northwestern University)
Bennett K. Smith (McGill University)
Stephen McAdams (McGill University)
Author email: kit.soden@mcgill.ca

Links between music and emotion in dramatic art works (film, opera, or other theatrical forms) have long been recognized as integral to a work's success. A key question concerns how the auditory and visual components in dramatic works correspond and interact in the elicitation of affect. We aim to identify the components of dramatic music that can clearly represent basic emotional categories, using empirical methods to isolate and measure auditory and visual interactions. By separating visual and audio components, we are able to control for, coordinate, and compare their individual contributions. With stimuli from opera and film noir, we collected a rich set of data (N=120) using custom software developed to enable participants to successively record real-time emotional intensity, to create event segmentations, and to apply overlapping, multi-level affect descriptor labels to stimuli in audio-only, visual-only, and combined audiovisual conditions. Our findings suggest that intensity profiles across conditions are similar, but that the audio-only component is rated as stronger than the visual-only or audiovisual components. Descriptor data show congruency in responses based on six basic emotion categories and suggest that the audio-only component elicits a wider array of affective responses, whereas the visual-only and audiovisual conditions elicit more consolidated responses. These data will enable a new type of musical analysis based entirely on emotional intensity and emotion category, with applications in music perception, music theory, composition, and musicology.

Back to schedule

3:30pm

Auditory and Somatosensory Interaction in Speech Perception in Children and Adults
Paméla Trudeau-Fisette (Université du Québec à Montréal)
Camille Vidou (Université du Québec à Montréal)
Takayuki Ito (Université du Québec à Montréal)
Lucie Ménard (Université du Québec à Montréal)
Author email: ptrudeaufisette@gmail.com

Multisensory integration allows us to link sensory cues from multiple sources and plays a crucial role in speech development. However, it is not clear whether this ability is innate or whether efficient integration of sensory information in speech develops through repeated sensory input while the brain is maturing. We investigated the integration of auditory and somatosensory information in speech processing in a bimodal perceptual task in 15 young adults (age 19 to 30) and 14 children (age 5 to 6). The participants were asked to identify whether the perceived target was the sound /e/ or /ø/. Half of the stimuli were presented under a unimodal condition with only auditory input. The other stimuli were presented under a bimodal condition with both auditory input and somatosensory input, the latter consisting of facial skin stretches provided by a robotic device that mimics the articulation of the vowel /e/. The results indicate that the effect of somatosensory information on sound categorization was larger in adults than in children. This suggests that the integration of auditory and somatosensory information evolves throughout the course of development.

Back to schedule

3:45pm

Rhythmic determinants of developmental dyslexia
Valentin Begel (Lille University)
Simone Dalla Bella (University of Montréal)
Quentin Devignes (Lille University)
Madeline Vanderbergue (Lille University)
Marie-Pierre Lemaitre (University Hospital of Lille)
Delphine Dellacherie (Lille University)
Author email: valentinbegel@gmail.com

Temporal accounts of developmental dyslexia (DD) postulate that a general predictive timing impairment plays a critical role in this disorder. However, DD is characterized by timing disorders as well as cognitive and motor dysfunctions, and it is still unclear whether non-verbal timing and rhythmic skills per se are good predictors of DD. This study investigated the independent contribution of timing to DD beyond the motor and cognitive functions typically impaired in DD. We submitted children with DD (aged 8-12) and controls to perceptual timing, finger tapping, and fine motor control tasks, as well as attention and executive tasks. Children with DD performed more poorly than controls on most of these tasks. The predictors of DD identified with logistic regression modeling were beat perception and precision in tapping to the beat (both predictive timing variables), children's tapping rate, and mental flexibility. These data support temporal accounts of DD in which predictive timing impairments related to dysfunctional brain oscillatory mechanisms partially explain the core phonological deficit, independently of general motor and cognitive functioning. In addition, we provide strong evidence that the predictive timing deficits in DD are not a by-product of other timing abilities (i.e., perception of durations) or of global dysfunction in DD.
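
A minimal sketch of the kind of logistic-regression modeling described above is shown below: group membership (DD vs. control) is predicted from timing, motor, and cognitive measures. The variable names and the synthetic data are hypothetical and are not the study's dataset.

```python
# Hypothetical sketch (synthetic data): logistic regression predicting dyslexia status
# from timing, motor, and cognitive measures.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
df = pd.DataFrame({
    "beat_perception": rng.normal(size=n),      # predictive timing measure
    "tapping_precision": rng.normal(size=n),    # predictive timing measure
    "tapping_rate": rng.normal(size=n),
    "mental_flexibility": rng.normal(size=n),
})
# Simulated group membership (1 = DD, 0 = control), loosely tied to the timing measures.
logit_p = -0.8 * df["beat_perception"] - 0.6 * df["tapping_precision"]
df["dyslexia"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = sm.Logit(df["dyslexia"], sm.add_constant(df.drop(columns="dyslexia"))).fit()
print(model.summary())   # coefficients show which measures predict group status
```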

Back to schedule

4:00pm

The effectiveness of musical interventions on cognition in children with autism spectrum disorder: A systematic review and meta-analysis
Kevin Jamey (University of Montréal)
Nathalie Roth (University of Montréal)
Nicholas E. V. Foster (University of Montréal)
Krista L. Hyde (University of Montréal)
Author email: kevin.jamey@umontreal.ca

There is considerable interest in using music interventions (MIs) to address core impairments present in children and adolescents with autism spectrum disorder (ASD). An increasing number of studies suggest that MIs have positive outcomes in this population, but no systematic review employing meta-analysis has to date investigated the efficacy of MIs across three of the predominant symptom domains in ASD, specifically social functioning, maladaptive behaviors, and language impairments. A systematic evaluation was performed across 17 peer-reviewed studies comparing MIs with non-music interventions (NMIs) in children with ASD. Quality assessment was also undertaken based on the CONSORT statement. Eleven studies fulfilled the inclusion criteria for meta-analysis, and these quantitative analyses supported the effectiveness of MIs in ASD, particularly for measures sensitive to social maladaptive behaviors. Comparisons further suggested benefits of MIs over NMIs for social outcomes, but not for non-social maladaptive or language outcomes. Methodological issues were common across studies, including small sample sizes, restricted durations and intensities of interventions, missing sample information and matching criteria, and attrition bias. Together, the combined systematic review and meta-analysis presents an up-to-date evaluation of the evidence for the benefits of MIs in children with ASD. Key recommendations are provided for future clinical interventions and research on MIs in ASD.

Back to schedule

Poster Abstracts

General Auditory Processing (101-107)

Poster 101

Using survival analysis to examine the impact of auditory distraction on the time-course of insight versus analytic solutions in problem solving
John E. Marsh (University of Central Lancashire)
Emma Threadgold (University of Central Lancashire)
Melissa E. Barker (University of Central Lancashire)
Damien Litchfield (Edge Hill University)
Linden J. Ball (University of Central Lancashire)
Author email: jemarsh@uclan.ac.uk; lball@uclan.ac.uk

We report a study that applied survival analysis to examine theory-based predictions relating to the impact of auditory distraction on the time-course of insight versus analytic solutions in problem solving with both verbal and visuo-spatial tasks. The study involved a within-participants design that manipulated task type (problems typically solved via insight vs. ones typically solved via analysis), solution modality (verbal vs. visuo-spatial) and auditory distraction (to-be-ignored background speech vs. quiet). When participants offered a solution they also self-reported their solution strategy (i.e., whether it was more insight-based or analysis-based). We also took measures of working memory capacity as exploratory variables: verbal (operation span) and spatial (symmetry span). Our survival analysis shows how the time-course of solution generation via insight versus analysis is impacted by auditory distraction in ways that are predicted by problem-solving theories that recognise the interplay between: (i) implicit restructuring processes; and (ii) explicit executive processes.
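
For readers unfamiliar with applying survival analysis to solution times, the sketch below fits a Cox proportional-hazards model to synthetic time-to-solution data with distraction and self-reported strategy as covariates (using the lifelines package). The column names and data are illustrative assumptions, not the authors' design or model.

```python
# Illustrative sketch only (synthetic data): survival analysis of time-to-solution,
# with unsolved problems treated as censored observations.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "solution_time": rng.exponential(scale=60, size=n),   # seconds until a solution
    "solved": rng.binomial(1, 0.8, size=n),                # 0 = censored (no solution)
    "speech_distraction": rng.integers(0, 2, size=n),      # 1 = to-be-ignored speech
    "insight_strategy": rng.integers(0, 2, size=n),        # 1 = insight self-report
})

cph = CoxPHFitter()
cph.fit(df, duration_col="solution_time", event_col="solved")
cph.print_summary()   # hazard ratios for distraction and strategy on solution times
```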

Back to poster listing

Poster 102

Confidence, learning, and metacognitive accuracy track auditory ERP dynamics
Alexandria Zakrzewski (Kansas State University)
Natalie Ball (University at Buffalo)
Destiny Bell (Kansas State University)
Kelsey Wheeler (Kansas State University)
Matthew Wisniewski (Kansas State University)
Author email: aczakrzewski@ksu.edu

Recent research has focused on measuring neural correlates of metacognitive judgments in decision and post-decision processes during memory retrieval and categorization. However, many tasks may require monitoring of earlier sensory/perceptual processes. In the ongoing research described here, we are examining the neural correlates of confidence, learning, and metacognition in simple auditory tasks. In Study 1, participants indicated which of two intervals contained an 80-ms pure tone embedded in white noise. Tone-locked event-related potentials (ERPs) were used to investigate the processing stages related to confidence. N1, P2, and P3 amplitudes were larger for high- compared to low-confidence trials, indicating that processing at relatively early (N1) and late (P3) stages are associated with confidence judgments. Study 2 examined whether improvements in auditory detection and plasticity in the ERP, as a result of perceptual learning, were associated with changes in confidence ratings. As in Study 1, participants were trained to detect either an 861-Hz or 1058-Hz tone in noise. EEG was recorded during the presentation of trained and untrained frequency tones during active detection and passive exposure to sounds. During the active detection portion, accuracy, confidence ratings, P2, and P3 amplitudes were higher for trained compared to untrained tones. Further, the P2 amplitude effect remained, even under passive exposure to trained and untrained tones. Study 3 examines the relationship between metacognitive accuracy (meta-d') and the ERP features found to be associated with confidence in Studies 1 and 2. Current data support the trend that meta-d' tracks individual differences in ERPs. Participants with better metacognitive accuracy show larger overall amplitude differences between high- and low-confidence trials. We suggest that metacognitive judgments can track both sensory- and decision-related processes as well as perceptual learning. Additionally, differences in how the brain processes auditory signals may predict one's ability to monitor their own performance through confidence judgments.

Back to poster listing

Poster 103

Impacts of pre-stimulus brain states on auditory performance
Matthew Wisniewski (Kansas State University)
Author email: mgwisniewski@ksu.edu

Pre-stimulus brain states modulate performance in perceptual tasks. Evidence largely comes from electroencephalogram (EEG) recordings showing that performance is correlated with EEG phase prior to stimulus onset. Here, three questions are explored: (1) whether spontaneous oscillations predict performance, (2) whether learning is associated with adjustments of pre-stimulus phase, and (3) whether the temporal dynamics of stimuli determine the frequencies at which effects are seen. In Experiment 1, participants heard multiple 40-ms sinusoidal tones separated by short silent intervals. The task was to indicate the tone frequency pattern (e.g., low-low-high). Unlike prior work, no background sounds were employed to entrain or induce EEG oscillations. Pre-stimulus 7-10 Hz phase was found to differ between correct and incorrect trials ~200 to 100 ms prior to tone-pattern onset. Further, after sorting trials into bins based on phase, accuracy showed a clear cyclical trend. In Experiment 2, the same task was used except that a brief noise burst served as a warning signal 3 s before stimulus onset. Tone pattern identification improved over 2 days of training. Though there were increases in post-stimulus phase consistencies, and a replication of pre-stimulus effects from Experiment 1, there was no clear indication that learning was associated with increased pre-stimulus phase consistency. In Experiment 3, the rate at which tones were presented in the stimulus was manipulated. Though performance was once again correlated with pre-stimulus phase, the temporal dynamics of the stimulus had no clear impact on the frequency at which these effects were observed. These data add to a growing body of evidence that oscillatory brain states impact auditory perception. Further, the work shows that these states do not need to be induced, and may be robust to changes in the temporal dynamics of stimuli and tasks.
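A minimal sketch of the phase-binning logic is given below, using standard SciPy tools (band-pass filtering plus the Hilbert transform). The sampling rate, window placement, and synthetic data are assumptions for illustration; the point is only to show how pre-stimulus phase can be extracted and related to accuracy, not to reproduce the reported analysis.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    rng = np.random.default_rng(0)
    fs = 250                                       # sampling rate (Hz), illustrative
    trials = rng.normal(size=(300, 2 * fs))        # synthetic 2-s single-channel epochs
    correct = rng.integers(0, 2, 300)              # synthetic accuracy labels (0/1)
    onset = fs                                     # stimulus onset 1 s into each epoch

    # Band-pass 7-10 Hz, then take the analytic signal to get instantaneous phase.
    b, a = butter(4, [7, 10], btype="bandpass", fs=fs)
    phase = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))

    # Phase at a single pre-stimulus time point (~150 ms before onset), one value per trial.
    pre_phase = phase[:, onset - int(0.15 * fs)]

    # Sort trials into phase bins and compute accuracy per bin; a cyclical trend across
    # bins is the kind of pattern reported in Experiment 1.
    edges = np.linspace(-np.pi, np.pi, 9)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (pre_phase >= lo) & (pre_phase < hi)
        if in_bin.any():
            print(f"phase [{lo:+.2f}, {hi:+.2f}): accuracy = {correct[in_bin].mean():.2f}")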

Back to poster listing

Poster 104

Where are my ears? Impact of the point of observation on the perceived direction of a moving sound source
Mike Russell (Washburn University)
Bryce Strickland (Washburn University)
Gabrielle Kentch (Washburn University)
Author email: mike.russell@washburn.edu

One of the major hallmarks of James J. Gibson's (1979) ecological approach to perception is the notion that the environment structures the energy within it. For each location in space, the energy arriving at any given location is defined by the objects in the environment and the location of the energy source. As one would imagine, each location within a setting is unique and a change in location will result in a change in the energy available at that location. Thus, the light contacting the eye is dependent on the arrangement of objects, the position of an observer, and the energy source. In brief, perception is based, in part, on the point of observation. Since then, a number of studies have found that visual judgments are significantly influenced by the particular height at which we see the world. Manipulations of eye height have significantly affected observer judgments of gap size, surface height, and distance, for example. If where we see the world from (i.e., eye height) affects visual judgments of the world, then it can be expected that where we hear from (i.e., ear height) influences auditory judgments. The present study investigated the impact of "observer" location on the ability to accurately judge the direction of a moving sound source. More specifically, participants were exposed to audio recordings of a pedestrian who was walking either up or down a set of stairs. The recordings were made from either ear, waist, or ankle height. In a second experiment, comparisons were made between egocentric and allocentric points of observation. The participant's task was to simply report the pedestrian's direction of motion. The findings are discussed in terms of the importance of the point of observation in affecting auditory perception of movement direction, in particular, and auditory spatial perception, in general.

Back to poster listing

Poster 105

A tree falls in a forest. Does it fall hard or soft? Perception of contact type and event title on observer judgments
Carli Herl (Washburn University)
Mike Russell (Washburn University)
Author email: carli.herl@washburn.edu; mike.russell@washburn.edu

The interaction of two objects or materials has the capacity to produce sound. The result of the interaction not only creates awareness that an event has transpired, it also has the capacity to inform observers about the form or particulars of the event. It has been shown that observers are able to use the resulting acoustic event to make categorical distinctions (e.g., bouncing vs. breaking, sex of pedestrian, shape of a struck object) and metrical judgments (e.g., hardness of the striking object, length of a dropped object, the area of the struck object) of the object involved. With respect to vision, tau is believed to be informative about time-to-contact and time-to-pass-by (i.e., the time at which an object will reach the point of observation). In terms of audition, there has been considerable debate as to the extent to which tau is informative. Controversy aside, tau still remains a potential avenue of research in terms of its time derivative (henceforth referred to as tau-dot). Tau-dot provides the opportunity for perceivers to judge the severity of collision (hard or soft) between two objects. Participants in Experiment 1 were exposed to 8 distinct events, with each event differing in the force used to create that event. Participants judged (on a scale of 0 to 10) the perceived severity of the contact. Experiment 2 was identical, but with half the participants being provided a very brief description of the event. The results revealed that events deemed hard and soft were essentially perceived as such. It was also determined that the different events were not judged equivalently. Interestingly, participant awareness of the object involved in the event significantly affected perception of contact intensity. Discussion will be given to developing a coherent theory of event perception as well as future avenues for research.
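For readers unfamiliar with the tau variable, the standard formulation (after Lee, 1976) is summarized below as background; the soft/hard criterion quoted is the classic constant-tau-dot result and is not part of the authors' own derivation.

\[
\tau(t) = \frac{x(t)}{-\dot{x}(t)}, \qquad \dot{\tau}(t) = \frac{d\tau}{dt},
\]

where x(t) is the remaining gap between the object and the point of observation and its time derivative is the rate of closure, so that tau approximates the time remaining until contact. Under this formulation, keeping the magnitude of tau-dot at or below 0.5 during an approach corresponds to a controlled, "soft" contact (closure velocity reaches zero at contact), whereas magnitudes between 0.5 and 1 imply a "hard" collision with residual velocity.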

Back to poster listing

Poster 106

Does auditory looming bias exist? Influence of participant position and response method on accuracy of arrival judgments
Gabrielle Kentch (Washburn University)
Mike Russell (Washburn University)
Author email: gabrielle.kentch@washburn.edu; mike.russell@washburn.edu

When participants are instructed to predict the time-to-contact of a looming object (an object approaching in depth), past literature has supported the existence of an auditory looming bias (e.g., Neuhoff, 1998; Rosenblum et al., 1987; Schiff & Oldak, 1990). A looming bias occurs when participants report the object as having arrived at the point of observation when, in reality, it is still some distance away. It has been theorized that humans perceive looming objects as closer than they truly are to facilitate survival by allowing time for evasive action to be taken (e.g., Haselton & Nettle, 2006; Neuhoff, 2001). Studies supporting the existence of such a phenomenon suggest a significant amount of slippage between an observer's perception of a moving object's position and the position of the object in reality. However, in the vast majority of these studies, participants were exposed to recorded or virtually simulated stimuli, through the use of screens and speakers. They were also instructed to use a non-ecological (i.e., non-action-based) response method to indicate the position of the object (e.g., pressing a key on a keyboard). To address the potential of methodological error, the aim of the present study was to measure the effects of auditory looming bias on action-based participant responses to a live event. Using only auditory information, participants were instructed to perform either an action-based response (interception or evasion) or a non-action-based response (pressing a key or verbal response). The event participants responded to was a ball rolling down a track. Participants were seated such that the ball rolled toward or past them. Accuracy of response was measured at time-to-contact and time-to-pass-by as well as between the four response methods. The findings will be discussed in terms of the difference between action and perception as well as the impact of the point of observation.

Back to poster listing

Poster 107

Influence of Recording Device Position on Judgments of Pedestrian Direction Across Two Dimensions
Bryce Strickland (Washburn University)
Mike Russell (Washburn University)
Author email: bryce.strickland@washburn.edu; mike.russell@washburn.edu

A number of studies involving auditory motion perception have been conducted, but they have been limited to one-dimensional changes in object position. However, in natural settings, sound-producing sources vary in distance, azimuth, and altitude. Individuals perceive approaching sound sources as becoming louder and receding sound sources as becoming quieter. Only one known study to date has focused on perception of object motion in two dimensions. Johnston and Russell (2017) discovered that participants were approximately 95% accurate at judging motion in the horizontal dimension versus only 65% accurate at judging motion in the vertical dimension. However, that study did not examine the impact that the location of the recording device had on perceptual judgments. The sound created by the pedestrian's feet contacting the ground (stairs) will be altered by the relative positions of the pedestrian and the sound recording device. It is expected that variations in the position of the recording device will translate into different aspects of the acoustic energy array being detected. In the present study, participants were exposed to six sounds recorded from three recording-device locations (the bottom, middle, and top of a staircase). The participants' task was to identify whether the pedestrian approached or receded, and whether the pedestrian ascended or descended the staircase. The results suggest that recording at different locations capitalizes on our ability to make use of acoustic structure, such as reverberation, which is informative about the direction of a moving sound source.

Back to poster listing

Multisensory Processing (108-114)

Poster 108

The Pupillary Dilation Response to Auditory Deviations: A Matter of Acoustical Novelty?
Alessandro Pozzi (Laval University)
Alexandre Marois (Laval University)
Johnathan Crépeau (Laval University)
Lysandre Provost (Laval University)
François Vachon (Laval University)
Author email: alessandro.pozzi.1@ulaval.ca

The occurrence of a sound that deviates from the recent auditory past tends to cause an involuntary diversion of attention. Such attentional capture, which originates from an acoustic-irregularity sentinel detection system, generally leads to performance impairment on the ongoing cognitive task. Recent studies showed that this attentional response can be indexed by a rapid pupillary dilation response (PDR). However, the PDR has been observed exclusively following the presentation of a (deviant) sound that induced an acoustical change. This raises the question as to whether the PDR is a product of the novelty of the capturing event or its violation of learned expectancies based on any invariance characterizing the auditory background. The present study aimed to determine whether the PDR to a deviant sound could be elicited in the absence of acoustic novelty. To do so, subjects completed a visual serial recall task while ignoring an irrelevant auditory sequence. Standard sequences consisted of two alternating spoken letters (e.g., A B A B A B). A deviation was induced by either inserting a new letter (deviant change; A B A B X B) or by repeating one of the two letters (deviant repetition; A B A B B A). Recall performance was poorer in the presence of any type of deviant. More importantly, both letter change and letter repetition elicited a significant PDR. By demonstrating that a pupillary response was triggered in the absence of acoustic novelty, this study suggests that the PDR truly indexes attentional capture and that it is underpinned by higher-order, expectancy-violation detection cortical processes.

Back to poster listing

Poster 109

Is Auditory Working Memory More Demanding than Visual?
Joseph Rovetti (Ryerson University)
Huiwen Goy (Ryerson University)
Frank Russo (Ryerson University)
Author email: joseph.rovetti@ryerson.ca; russo@psych.ryerson.ca

Working memory (WM) involves the storage and manipulation of information. A common task used to assess WM capacity is the n-back, which places a greater load on WM as the value of n increases. It is well-known that dorsolateral prefrontal cortex (DLPFC) activation increases as n increases, but only three studies have compared brain activation during the n-back as a function of stimulus modality (i.e., auditory or visual). These studies varied with regard to neuroimaging methods and load conditions employed. The earliest study, a positron emission tomography (PET) study of the 3-back, found no effect of stimulus modality on brain activation. In contrast, two subsequent functional magnetic resonance imaging (fMRI) studies of the 2-back found that DLPFC activation was greater during the auditory condition. The fMRI findings were interpreted as evidence that auditory WM places greater demand on the central executive than visual WM. In the current study, our aim was to assess two explanations for these discrepant findings: (1) whether the effect of stimulus modality on DLPFC activation is WM load-dependent, and (2) whether the effect of stimulus modality on DLPFC activation may have been driven by fMRI scanner noise. To do this, 16 younger adults completed an n-back with visual stimuli and one with auditory stimuli, both at four levels of WM load: 0-back (control), 1-back (easy), 2-back (medium), and 3-back (hard). Concurrently, activation of the DLPFC was measured using functional near-infrared spectroscopy, a quiet neuroimaging method. We found that DLPFC activation increased with WM load, but was not affected by stimulus modality at any WM load. This supports the earlier view obtained in the PET study that WM is modality-independent, and suggests that the two fMRI findings of greater DLPFC activation in the auditory n-back may have been caused by scanner noise making this condition more difficult.

Back to poster listing

Poster 110

Influence of ambiguous pitch-class information on sound-color synesthesia
Lisa Tobayama (The University of Tokyo)
Sayaka Harashima (The University of Tokyo)
Kazuhiko Yokosawa (The University of Tokyo)
Author email: tobayama@l.u-tokyo.ac.jp

Synesthesia is a phenomenon in which particular sensory inputs elicit atypical perceptions in addition to the standard perception. Several studies have shown that cognitive processes are essential for synesthesia. In grapheme-color synesthesia, an ambiguous graphemic stimulus can induce different synesthetic colors when it is interpreted as a digit or a letter (Myles et al., 2003; Dixon et al., 2006). We investigated the influence of ambiguous pitch-class information on sound-color synesthesia. Participants who were sound-color synesthetes (N = 12) listened to narrowband noises, each with a different pitch and a bandwidth of 400 cents spanning five continuous pitch classes, such that the pitch-class information was ambiguous (Fujisaki & Kashino, 2005). One month later, the participants engaged in the same task under two conditions: (1) the pitch-class name was shown; and (2) the pitch-class name was absent (identical to the first day). Presented pitch-class names were within the bandwidth of the noise (specifically, one pitch class lower or higher than the central one). We examined whether physically identical sounds elicited different synesthetic colors when the participants' interpretations of the pitch class were changed. The results differed among synesthetes based on their absolute pitch ability. For synesthetes who performed well in the pitch-class naming task, synesthetic colors differed significantly more from those reported one month earlier in the pitch-class-present condition than in the pitch-class-absent condition, whereas synesthetic colors did not differ significantly between conditions for the other synesthetes. We conclude that the influence of cognitive pitch-class interpretation can be observed even in sound-color synesthesia. However, this finding might be valid only for synesthetes with absolute pitch ability, who might usually associate synesthetic colors with pitch-class information.

Back to poster listing

Poster 111

Plasticity in color associations evoked by musical notes in a musician with synesthesia and absolute pitch
Cathy Lebeau (Université du Québec à Montréal)
François Richer (Université du Québec à Montréal)
Author email: lebeau.cathy@courrier.uqam.ca

Developmental synesthesia is a neurological condition in which certain perceptions or cognitions trigger supplementary perceptions (e.g., sounds evoke specific colors). Like absolute pitch, tone-color synesthesia is associated with enhanced music processing and increased connectivity in auditory cortex. The two conditions also show phenotypic and genotypic overlap as well as early childhood acquisition. Here, we report the case of NTM, a 27-year-old violinist with absolute pitch and automatic synesthetic associations between musical notes and specific colors, both acquired in early childhood. Her synesthesia was validated by a standardized synesthesia battery. At age 23, NTM switched from classical (A440 tuning standard) to baroque (A415 tuning standard) music learning and experienced a serious incongruence in her synesthesia because of the semitone difference. Absolute pitch often interferes with adaptation to a new tuning standard, but for NTM, her synesthetic color associations made the interference so intense that it prevented her from playing for a whole semester. Using voluntary training (coloring the scores with sound-congruent colors), NTM succeeded in learning new synesthetic color associations to notes in order to play baroque music. She also retained the initial set of color associations to notes in the A440 standard and could voluntarily switch from one set of associations to the other. The new synesthetic training also changed her letter-color synesthetic associations. This rare case underscores the plasticity of associations in tone-color synesthesia, even in adults.

Back to poster listing

Poster 112

The influence of timbre of musical instruments and ensembles on an associative color-sound perception
Elena Nikolaevna Anisimova (Waldorf School Backnang)
Author email: mus.strahl@googlemail.com

This study investigated correlations in audio-visual perception, with the goal of identifying common tendencies in the relationship between auditory perception and visual associations. Materials and Methods: A test was developed for this purpose, based on the cognitive features of auditory and visual perception, and administered to 110 respondents. Along with other audio-visual associations, we examined the influence of the timbre of musical instruments and ensembles on the occurrence of audio-visual associative correlations during music perception. Results and Discussion: The following general conclusions can be made about the influence of instrument timbre on the occurrence of color associations: musical instruments with a rich timbre are associated with dark colors; musical instruments with a clear timbre are associated with cool and light colors; and musical instruments with a distinct sound are associated with colors of high lightness. With respect to the influence of musical ensembles on associative relationships with color, the following tendencies were obtained: clearer, more powerful, and more intense sounds of a musical composition are associated with clearer and more intense colours; the complex sounds of jazz orchestras, choirs, and electronic music are associated with complex and secondary colours; and saturated sounds are associated with colours of low lightness. The data on these relationships will be presented in the poster in tabular form. Overall, the results suggest that cognitive processing plays a determining role in associative audio-visual relationships, and that these associative relationships can be explained through the recognition of patterns at the cognitive level.

Back to poster listing

Poster 113

External validation of the Battery for the Assessment of Sensorimotor Auditory and Timing Abilities (BAASTA) with the Montreal-Beat Alignment Test (M-BAT) for the assessment of synchronization to music
Ming Ruo Zhang (University of Montréal)
Véronique Martel (University of Montréal)
Isabelle Peretz (University of Montréal)
Simone Dalla Bella (University of Montréal)
Author email: zhang.ming.ruo1@gmail.com; veronique.martel.6@umontreal.ca

Tapping to the beat of a rhythmic stimulus, either music or a metronome, is a well-known task to measure rhythmic skills. Several musical synchronization tasks have been developed, for instance as part of recent batteries of rhythmic tests, such as the Battery for the Assessment of Sensorimotor Auditory and Timing Abilities (BAASTA) and the Montreal-Beat Alignment Test (M-BAT). Here synchronization to the beat is tested with different kinds of musical stimuli: BAASTA uses computer-generated classical music with a fixed tempo (100 bpm), while M-BAT contains more ecological music stimuli (i.e., real performances) that vary in terms of tempo (82-170 bpm) and genre (Merengue, Rock, Jazz, etc.). These two batteries have never received external validation, which was the aim of the present study. Forty-five participants performed the synchronization tasks from the two batteries and an unpaced tapping task (from BAASTA) to assess their spontaneous tempo. We calculated participants' synchronization consistency (i.e., how variable participants were in aligning their taps to the beat) for both synchronization tasks, and their spontaneous tempo (mean inter-tap interval; ITI) with the unpaced tapping task. Moreover, we computed motor variability, namely the coefficient of variation of the mean inter-tap interval (CV ITI). The results show a strong correlation between the synchronization consistency of the two batteries, despite the difference in complexity of the stimuli. This correlation is also independent of motor variability and spontaneous tempo, and there is no correlation between the participants' motor variability for the two batteries. This indicates that both tasks assess audio-motor coupling with music, and not general motor variability. This study provides support for the external validity of both batteries to assess synchronization to music.
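A minimal sketch of how these two measures can be computed from tap and beat times is given below; the variable names and synthetic data are illustrative assumptions, and the circular-statistics formulation of consistency shown here is one common convention rather than necessarily the exact implementation used in BAASTA or M-BAT.

    import numpy as np

    # Synthetic example: beats at 120 bpm and taps jittered around each beat.
    rng = np.random.default_rng(0)
    ibi = 0.5                                          # inter-beat interval (s), i.e., 120 bpm
    beats = np.arange(0.0, 30.0, ibi)
    taps = beats + rng.normal(0.0, 0.02, beats.size)   # one tap per beat, with timing noise

    # Motor variability: coefficient of variation of the inter-tap interval (CV ITI).
    itis = np.diff(taps)
    cv_iti = itis.std() / itis.mean()

    # Synchronization consistency: length of the mean resultant vector of the
    # tap-to-beat relative phases (1 = perfectly consistent, 0 = random).
    rel_phase = 2 * np.pi * (taps - beats) / ibi
    consistency = np.abs(np.mean(np.exp(1j * rel_phase)))

    print(f"CV ITI = {cv_iti:.3f}, synchronization consistency = {consistency:.3f}")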

Back to poster listing

Poster 114

Tablet version of the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA)
Hugo Laflamme (University of Montréal)
Mélody Blais (University of Montréal)
Naeem Komeilipoor (University of Montréal)
Camille Gaillard (University of Montréal)
Melissa Kadi (University of Montréal)
Agnès Zagala (University of Montréal)
Simon Rigoulot (University of Montréal)
Sonja Kotz (University of Maastricht & Max Planck Institute for Human Cognitive and Brain Sciences)
Simone Dalla Bella (University of Montréal)
Author email: hlaflamme16@gmail.com; simone.dalla.bella@umontreal.ca

Perceptual and sensorimotor timing skills can be fully assessed with the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA). The battery is a reliable tool for evaluating timing and rhythm skills, revealing high sensitivity to individual differences. We present a recent implementation of BAASTA as an app on a tablet device. Using a mobile device ensures portability of the battery while maintaining excellent temporal accuracy in recording performance in perceptual and motor tests. BAASTA includes 9 tasks (four perceptual and five motor). Perceptual tasks are duration discrimination, anisochrony detection (with tones and music), and a version of the Beat Alignment Test. Production tasks involve unpaced tapping, paced tapping (with tones and music), synchronization continuation, and adaptive tapping. Normative data obtained with the tablet version of BAASTA in a group of healthy non-musicians are presented, and profiles of perceptual and sensorimotor timing skills are detected using machine-learning techniques. The relation between the identified timing and rhythm profiles in non-musicians and general cognitive functions (working memory, executive functions) is discussed. These results pave the way to establishing thresholds for identifying timing and rhythm capacities in the general and affected populations.

Back to poster listing

Auditory Pathology (115-119)

Poster 115

Effects of Auditory Distraction on Cognition and Eye Movements in Adults with Mild Traumatic Brain Injury
Ileana Ratiu (Midwestern University)
Miyka Whiting (Midwestern University)
Author email: iratiu@midwestern.edu

Traumatic brain injury (TBI) impacts millions of individuals each year. Following a TBI, individuals may experience physiological symptoms, such as oculomotor dysfunction, and cognitive symptoms, such as deficits in memory, attention, and higher-order cognitive abilities. There is limited research on the impact of auditory distraction on functional task performance (e.g., reading) following a TBI. This study examined the effect of auditory distraction on memory and reading performance in adults with and without mild traumatic brain injury (mTBI). Twenty-six healthy controls and 23 adults with a history of mTBI completed a short-term memory task, a working memory task, and an academic reading comprehension task. All tasks were administered with and without auditory distraction. Participants' eye movements were tracked during the academic reading comprehension task. Compared with healthy controls, individuals with mTBI recalled fewer items on the short-term memory task in the presence of auditory distraction, but not on the working memory task. On the academic reading comprehension task, individuals with mTBI performed worse than healthy controls in the presence of auditory distraction, but only on specific types of content. Eye movement patterns corroborated the behavioral data and revealed that individuals with mTBI experienced greater difficulty than healthy controls in the presence of auditory distraction on specific types of content. These findings indicate that experiencing a mTBI may have lasting effects on cognitive abilities, particularly abilities that are recruited for functional tasks, such as academic reading.

Back to poster listing

Poster 116

Head Injuries and the Hearing Screening Inventory
Chris Koch (George Fox University)
Abigail Anderson (George Fox University)
Author email: ckoch@georgefox.edu

Head trauma can lead to problems with the ear and auditory pathway. These problems can involve tympanic membrane perforation, fragments in squamous epithelium, damage to the ossicles, or ischemia of the cochlear nerve. It is common for behavioral checklists for concussion or head injuries to include an item about hearing difficulty. In the present study, 152 introductory psychology students completed a survey in which they indicated whether they had ever had a concussion or sustained a head injury. Approximately one-third (35.53%) of the sample had a history of head trauma. The Hearing Screening Inventory was also part of the survey. Overall, participants who had a previous head injury reported more hearing difficulties than participants with no previous head injury (t(150) = 2.15, p < .02). Although this difference had a moderate effect size (d = .37), it suggests that hearing difficulties may linger, since participation was not limited to those having a recent head injury but was open to anyone who had a head injury at any point in time. An examination of specific hearing difficulties revealed that the difference between the two groups was based almost exclusively on their ability to distinguish target sounds from background noises. Specifically, the ability to understand words in music (t(150) = 2.36, p < .01; d = .40) and to isolate an individual speaking from background conversations (t(150) = 2.44, p < .01; d = .41) differentiated the two groups. This finding is consistent with Hoover, Souza, and Gallun (2017), who also found that head injury can impair target and noise processing.

Back to poster listing

Poster 117

Can rhythmic abilities distinguish neurodevelopmental disorders?
Camille Gaillard (University of Montréal)
Mélody Blais (University of Montréal)
Frédéric Puyjarinet (University of Montpellier)
Valentin Bégel (McGill University)
Régis Lopez (National Reference Center for Narcolepsy and Idiopathic Hypersomnia)
Madeline Vanderbergue (University of Lille)
Marie-Pierre Lemaître (University Hospital of Lille)
Quentin Devignes (University of Lille)
Delphine Dellacherie (University of Lille)
Simone Dalla Bella (University of Montréal)
Author email: camille.sakina.gaillard@gmail.com; meloblais@gmail.com; f.puyjarinet@hotmail.fr; valentin.begel@univ-lille3.fr; simone.dalla.bella@umontreal.ca

The majority of people can easily track the beat of simple and complex auditory sequences (e.g., a metronome or music) and move along with it. There is growing evidence that these rhythmic abilities are impaired in children with neurodevelopmental disorders, such as developmental dyslexia or ADHD. Rhythm impairments are shown with a variety of perceptual and sensorimotor measures. However, due to the heterogeneity across measures, it is unclear whether rhythmic difficulties are per se a hallmark of a particular disorder or rather the result of a common cognitive deficit in memory, attention, or executive functions. In this study, to test the possibility that profiles of rhythmic abilities might characterize neurodevelopmental disorders, we analyzed a large sample of children with neurodevelopmental disorders who underwent the same rhythmic tests. Children (n = 50, with ADHD or dyslexia; n = 40 controls) were tested with the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA), a battery assessing perceptual and sensorimotor timing abilities. We found that rhythm perception does not discriminate children with ADHD from children with dyslexia. In contrast, children with ADHD display higher motor variability than children with dyslexia and controls, as well as higher variability in synchronization, especially when ADHD is comorbid with DCD (developmental coordination disorder). These differences persist when controlling for children's general cognitive functioning. These results will pave the way to new methods for identifying different rhythmic profiles in children with neurodevelopmental disorders and for developing individualized rhythm-based interventions.

Back to poster listing

Poster 118

A Case Study in Unilateral Pure Word Deafness
Colin Noe (Rice University)
Simon Fischer-Baum (Rice University)
Author email: colinmichaelnoe@gmail.com

Pure-word-deafness (PWD) is a disorder in which processing of speech sounds is impaired, but processing of non-speech sounds is comparatively intact. Many cases of PWD result from bilateral damage, but numerous cases of PWD from single hemisphere injury have been documented. How unilateral damage can cause pure-word-deafness has been an area of debate in the last decade. Many speech perception models assume bilateral processing of acoustic cues, sublexical representations, and even lexical processing. How, if speech processing is bilateral, can unilateral damage result in PWD? And what can PWD tell us about the lateralization and time-course of normal speech perception? In a single case study of a patient with unilateral PWD, we examine this question in detail. We measure which aspects of auditory perception, both linguistic and non-linguistic, are impaired in this patient and which are intact. To assess these impairments, we combine measures of speech comprehension, environmental sound comprehension, and electrophysiology of early acoustic cue encoding. This is the first study to combine electrophysiology with neuropsychology to study PWD. We find that in this case, PWD is attributable to damage to late stages of phonemic processing and/or to damage to the connections between sublexical and lexical representations. These results challenge theories of speech perception which assume no abstract level of representation since this is precisely where damage is found in this patient. These results also challenge bilateral theories of speech perception which claim PWD should only result from bilateral damage.

Back to poster listing

Poster 119

Auditory Attentional Modulation in Hearing Impaired Listeners
Jason Geller (University of Iowa)
Inyong Choi (University of Iowa)
Author email: jason-geller@uiowa.edu

Conversations often take place in sub-optimal (noisy) environments. In order to accurately perceive speech in noise (SiN), selective attention is needed, which involves attending to a task-relevant sound source while avoiding distraction from concurrent sounds. In hearing-impaired listeners, the most commonly cited complaint is increased difficulty listening to speech in noisy environments. In the current study, we examine the role of selective attention in SiN performance by comparing normal-hearing listeners with those who use an assistive device (e.g., CIs or hearing aids). Assistive hearing devices are an increasingly common approach for remediating severe to profound hearing loss and could potentially aid SiN performance by amplifying a target sound. In the current EEG study, 14 patients with CIs and 10 normal-hearing listeners completed a two-stream target detection task with a pre-trial (written) cue indicating which auditory stream ("Up" vs. "Down") to attend to. On each trial, the word "Up" was presented five times, in a female voice, while "Down" was repeated four times, in a male voice. In normal controls, we observed clear signs of attentional modulation: cued targets elicited greater cortical auditory evoked responses. In assistive device users, however, there was inconsistent evidence for attentional modulation of cortical evoked responses. This suggests that poor auditory selective attention is one of the limiting factors underlying assistive device users' SiN difficulty.

Back to poster listing

Auditory Development (120-123)

Poster 120

Bilingual and monolingual toddlers can detect mispronunciations, regardless of cognate status
Esther Schott (Concordia University)
Krista Byers-Heinlein (Concordia University)
Author email: esther.schott@mail.concordia.ca

Children need to learn words with enough phonological detail to detect subtle differences between similar-sounding words (e.g., ball/bowl), while also allowing some phonological variation to account for differences between speakers. Young children's ability to detect the difference between banana and boonana (a mispronunciation) helps assess phonological detail in early word representations. Monolinguals can detect mispronunciations from an early age (Swingley, 2005). Whether bilingual children show the same ability is unclear (Ramon-Casas et al., 2009; Wewalaarachchi et al., 2017). We investigate two hypotheses: First, bilingual infants might be less sensitive to mispronunciations, due to more variability in speech sounds in their bilingual input. Second, cognates, such as banana and banane [fr.], may be represented with less phonological detail (Ramon-Casas et al., 2010). Our study compared mispronunciation detection in monolingual and bilingual toddlers, and asked whether bilinguals' phonetic sensitivities would be similar for cognate and non-cognate words. We tested 26 English-French bilingual and 23 English monolingual toddlers (24-32 months) in a looking-while-listening eyetracking paradigm. Toddlers saw object pairs (e.g., table-banana) and heard one of the object labels pronounced correctly or mispronounced ("Look at the banana/boonana!"). All stimuli were in English, and half of the target words were cognates. Proportion of target looking from 360-2000 ms after target onset was measured. Separate 2 (pronunciation: correct, mispronounced) x 2 (cognate status: cognate, non-cognate) ANOVAs were conducted for monolinguals and bilinguals. For both groups, there was a significant main effect of pronunciation (bilinguals: p = .049; monolinguals: p = .026). Both groups gazed more at the target for correctly pronounced than mispronounced words. We found no effect of cognate status for either group, and no interaction, indicating that both groups were equally sensitive to mispronunciations in cognate and non-cognate words. Together, these results underscore similarities between monolingual and bilingual development, and inform theories of how diverse types of experience impact language acquisition.

Back to poster listing

Poster 121

Does the structure of the lexicon change with age? An investigation using the Auditory Lexical Decision Task
Kylie Alberts (Union College)
Brianne Noud (Washington University in St. Louis)
Jonathan Peelle (Washington University in St. Louis)
Chad Rogers (Union College)
Author email: rogersc@union.edu

As people grow older and experience age-related hearing loss, the effort associated with the recognition of words is thought to increase (e.g., Rabbitt, 1991; Surprenant, 2007). In the current work we examined whether features of the lexicon, the mental dictionary that we use to recognize words, change with age. Using an auditory version of the lexical decision task (ALDT), Goh, Suarez, Yap, and Hui Tan (2009) found in young adults that phonological neighborhood density (how many words sound akin to a given word) and word frequency (how often a word is used in its given language) contribute significantly, and interact, in determining how quickly a word is aurally recognized. They found that words more frequent in the English language were recognized more quickly than low-frequency words. Their results also revealed an interaction such that word frequency effects were larger for words from sparse rather than dense phonological neighborhoods. In the current study we aimed to replicate the findings of Goh et al. (2009) and examine whether they would extend to a sample of older adults experiencing hearing loss. Participants (ages 18-78) were split into two age groups: young and older adults. The participants completed the ALDT using the same set of words provided by Goh et al. (2009). We replicated effects of word frequency but not neighborhood density. Older adults were slower than young adults, as expected (e.g., Salthouse, 1991), but no interactions between age and lexical variables persisted after converting reaction times to z-scores (Faust, Balota, Spieler, & Ferraro, 1999). Distributional analyses that estimated ex-Gaussian parameters revealed a similar pattern. We hold that these results suggest a relative preservation of the structure of the lexicon in older adults, even in the face of hearing loss, but they also point to the potential fragility of neighborhood density effects in the ALDT.
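The within-participant z-transformation mentioned above (Faust, Balota, Spieler, & Ferraro, 1999) removes overall speed differences so that they cannot masquerade as interactions with lexical variables. The sketch below, using a hypothetical trial-level data frame with columns subject, condition, and rt, is offered only to make that step concrete; it is not the authors' analysis script.

    import pandas as pd

    # Hypothetical trial-level lexical decision data (RTs in ms).
    df = pd.DataFrame({
        "subject":   [1, 1, 1, 1, 2, 2, 2, 2],
        "condition": ["HF", "LF", "HF", "LF", "HF", "LF", "HF", "LF"],  # high/low frequency
        "rt":        [620, 710, 640, 690, 810, 930, 790, 900],
    })

    # Standardize each participant's RTs relative to their own mean and SD.
    df["z_rt"] = df.groupby("subject")["rt"].transform(lambda x: (x - x.mean()) / x.std())

    # Condition means on the z scale are then comparable across fast and slow participants.
    print(df.groupby("condition")["z_rt"].mean())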

Back to poster listing

Poster 122

The temporal dynamics of intervention for a child with delayed language
Schea Fissel (Midwestern University, Glendale)
Ida Mohebpour (Midwestern University, Glendale)
Ashley Nye (Midwestern University, Glendale)
Author email: sfisse@midwestern.edu; imohebpour23@midwestern.edu; anye58@midwestern.edu

The dynamics of typical child language development involve adult-child interactions in which adult language input is naturally modified to scaffold child language learning from less complex states of variability/exploration to more complex states showing increased stability and flexibility of language use. Children with delayed language development show reduced complexity and increased variability of language form and meaning, yet we know little about the temporal dynamics of delayed/disordered language development, or the mechanisms by which intervention, characterized by the use of adult language input techniques, leads to more complex and stable child language use. To date, dynamical language interactions have largely been explored using linear, aggregative statistical methods that do not account for the temporal or organizational dynamics of variability out of which ordered, stable language emerges. The purpose of this study was to explore the temporal dynamics of language development in a 3-year-old female diagnosed with moderate expressive language delay, and to describe the temporal organization of therapeutic interactions between the clinician's and child's language across ten 1-hour sessions provided over three months. During intervention sessions, the clinician used an adapted form of auditory bombardment, by which adult language input (recasting, modeling) resulted in increased statistical regularity of language exemplars in form and meaning. This study used non-linear recurrence quantification analysis (RQA) to describe and quantify the stability (determinism [%DET]) and complexity (rENTR) of a categorical child language time series (S) across the 10 intervention sessions. Results showed gradually decreasing stability (97.84-92.80% DET) as complexity increased (0.36-0.60 rENTR); these results corresponded to linear indicators showing gradually increasing variability and language growth across intervention. Cross-RQA (CRQA) was used to describe clinician-child language interactions across intervention sessions, which revealed lexical synchronization that varied between child-led and adult-led interactions. Results suggest the utility of using RQA/CRQA to explore delayed language development.
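For readers unfamiliar with RQA, the sketch below computes the determinism (%DET) of a short categorical series from first principles: it builds a recurrence matrix from exact category matches and measures the share of recurrent points that fall on diagonal lines. The toy sequence and the from-scratch implementation are illustrative assumptions and do not represent the study's data or software.

    import numpy as np

    def determinism(series, min_length=2):
        """%DET: percentage of recurrent points on diagonal lines of length >= min_length."""
        s = np.asarray(series)
        n = s.size
        rec = (s[:, None] == s[None, :])          # recurrence matrix (exact category match)
        np.fill_diagonal(rec, False)              # exclude the trivial line of identity
        total = rec.sum()
        on_lines = 0
        for k in range(1, n):                     # scan each upper off-diagonal
            diag = np.diagonal(rec, offset=k)
            run = 0
            for point in np.append(diag, False):  # trailing False flushes the final run
                if point:
                    run += 1
                else:
                    if run >= min_length:
                        on_lines += run
                    run = 0
        on_lines *= 2                             # the matrix is symmetric
        return 100.0 * on_lines / total if total else 0.0

    # Toy categorical utterance codes, purely for illustration.
    toy = [1, 2, 2, 3, 1, 2, 2, 3, 1, 1, 3, 2]
    print(f"%DET = {determinism(toy):.1f}")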

Back to poster listing

Poster 123

Investigating the melodic pitch memory of children with autism spectrum disorder
Samantha Wong (McGill University)
Sandy Stanutz (McGill University)
Shalini Sivathasan (McGill University)
Emily Stubbert (McGill University)
Jacob Burack (McGill University)
Eve-Marie Quintin (McGill University)
Author email: samantha.t.wong@mail.mcgill.ca

Short- and long-term memory for individual tones and melody is enhanced among some musically untrained persons with autism spectrum disorder (ASD) (Bonnel et al., 2003; Heaton, Hermelin, & Pring, 1998; Stanutz, Wapnick, & Burack, 2014). Furthermore, while children with ASD demonstrate enhanced memory for specific tones in a chord (Heaton, 2003), the ability to identify and distinguish tones and pitches is not as distinct in typically developing (TD) children (Cooper, 1995; Fancourt, Dick, & Stewart, 2013). Absolute pitch (AP) ability, the pitch memory that is enhanced among musical savants with autism (Hermelin, 2001), offers a potential explanation for musical memory strength in children with ASD. The current project extends previous research (Heaton et al., 1998; Stanutz et al., 2014) indicating enhanced long-term melodic memory over a weeklong interval in children by investigating pitch memory specifically. We assessed the extent to which children with high-functioning ASD aged 8-12 years encode melody in long-term memory in the specific key in which the melody was first heard, in comparison to transposed versions of the melody. In preliminary analyses, the children with ASD (N = 7) showed more accurate long-term melodic memory (p < .05) than TD children (N = 16) matched on intelligence (as measured with the WASI-II) and auditory working memory (as measured with the Digit Span subtest of the WISC-V), but no group differences were found in distinguishing the target melody from the transposed version of the melody (p > .05). These tentative findings may indicate that previously documented strengths in musical memory are attributable to melodic memory rather than pitch memory. Further data collection to increase the sample size will provide stronger analytic power regarding the quality of pitch memory in the two groups. This will in turn help us build a stronger conclusion about the role of pitch memory in melodic memory.

Back to poster listing

Musical Features (124-130)

Poster 124

Bowed plates and blown strings: Odd combinations of excitation methods and resonance structures impact perception
Erica Huynh (McGill University)
Joël Bensoam (IRCAM)
Stephen McAdams (McGill University)
Author email: erica.huynh@mail.mcgill.ca; bensoam@ircam.fr; stephen.mcadams@mcgill.ca

What informational value does the timbre of a sound convey about its source? Any physical sound source set into vibration, such as a musical instrument, has mechanical properties. Two mechanical properties of musical instruments of interest are the excitation method and the resonance structure. They are closely related: the excitation method sets into vibration the resonance structure, which acts as a filter that amplifies, suppresses, and radiates sound components. We used Modalys, a digital physical modeling platform, to synthesize stimuli that simulate three excitation methods (bowing, blowing, striking) and three resonance structures (string, air column, plate), without the resulting sound necessarily being perceived as an existing musical instrument. We paired each excitation method with each resonance structure to produce nine excitation-resonator interactions. These interactions were either typical of acoustic musical instruments (e.g., bowed string) or atypical (e.g., struck air column). One group of listeners rated the extent to which the stimuli resembled bowing, blowing, or striking excitations (Experiment 1) and a second group rated the extent to which they resembled string, air column, and plate resonators (Experiment 2). Generally, listeners assigned the highest resemblance ratings to: (1) the excitations that actually produced the sound and (2) the resonators that actually produced the sound. These effects were strongest for stimuli representing typical excitation-resonator interactions. However, listeners confused different excitations or resonators for one another for stimuli representing atypical interactions. We address how perceptual data can inform physical modeling approaches, given that Modalys effectively conveyed excitations and resonators of typical but not atypical interactions. Our findings emphasize that our mental models for how musical instruments are played are very specific and limited to what we perceive in the physical world. We can then infer how novel sounds in the daily environment are incorporated into our mental models.

Back to poster listing

Poster 125

Evaluation of Musical Instrument Timbre Preference as a Function of Materials
Jacob Colville (James Madison University)
Michael Hall (James Madison University)
Author email: hallmd@jmu.edu

Preferences for particular (violin) timbres appear to depend upon materials and/or methods used in instrument construction (see Fritz, Curtin, Poitevineau, Morrel-Samuels, & Tao, 2012; Fritz, et al., 2014). The current investigation extended a similar evaluation to the soprano Bb clarinet. A musician produced strong exemplar tones for factorial combinations of mouthpieces (ABS plastic or hard rubber) with instrument bodies (African Blackwood, ABS plastic, or walnut) at three intended fundamental frequencies (F0: 220, 523, and 1,174 Hz). Tones were equated for duration and average amplitude. After a survey of musical training, fifteen listeners completed two tasks. First, to assess perceived preference, listeners judged how good each tone was as a clarinet [1 (very poor) - 7 (very good)]. Second, to evaluate perceptual distances between tones as a function of materials, listeners rated the similarity of tone pairs [1 (very dissimilar) - 7 (very similar)] at each F0. Judgments were influenced by mouthpiece/body combinations, but preferred sounds were often obtained from inexpensive materials. Specifically, tones from a plastic mouthpiece were judged better than those from a rubber mouthpiece for Blackwood and plastic instrument bodies. Similarity likewise varied with materials, resulting in (2-D) multi-dimensional scaling (MDS) solutions at each F0 that distributed stimuli similarly across one axis. This suggests that listeners relied on a shared acoustic characteristic across tones to judge similarity. Timbre relationships also varied with F0. MDS solutions revealed one particularly unique mouthpiece/body combination (220 Hz = plastic/walnut; 523 Hz = plastic/plastic; 1,174 Hz = rubber/plastic), and mean goodness decreased as F0 increased. These findings suggest that timbre should be evaluated as a function of pitch. Implications will be discussed, including for timbre and pitch interaction, acoustic predictors of similarity data, the relevance of reduced ranges of goodness for highly trained listeners, and concerns for generalizing findings across instruments.
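As a concrete illustration of the MDS step, the sketch below converts a hypothetical similarity matrix for the six mouthpiece/body combinations into dissimilarities and fits a two-dimensional solution with scikit-learn. The numbers are invented placeholders, not the study's ratings, and the scaling settings are one reasonable choice rather than the authors' exact procedure.

    import numpy as np
    from sklearn.manifold import MDS

    labels = ["plastic/Blackwood", "plastic/plastic", "plastic/walnut",
              "rubber/Blackwood", "rubber/plastic", "rubber/walnut"]

    # Hypothetical mean similarity ratings (1 = very dissimilar, 7 = very similar).
    similarity = np.array([
        [7.0, 5.5, 4.8, 4.2, 3.9, 3.5],
        [5.5, 7.0, 5.1, 3.8, 4.4, 3.6],
        [4.8, 5.1, 7.0, 3.5, 3.7, 4.9],
        [4.2, 3.8, 3.5, 7.0, 5.3, 4.6],
        [3.9, 4.4, 3.7, 5.3, 7.0, 4.8],
        [3.5, 3.6, 4.9, 4.6, 4.8, 7.0],
    ])

    # Convert similarities to dissimilarities (zero on the diagonal) and fit 2-D MDS.
    dissimilarity = similarity.max() - similarity
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissimilarity)

    for label, (x, y) in zip(labels, coords):
        print(f"{label:>18}: ({x:+.2f}, {y:+.2f})")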

Back to poster listing

Poster 126

The Spatial Representation of Pitch in Relation to Musical Training
Asha Beck (James Madison University)
Michael Hall (James Madison University)
Kaitlyn Bridgeforth (James Madison University)
Author email: hallmd@jmu.edu

The SMARC effect occurs when the speed to judge the direction of a pitch change across tones is influenced by task-irrelevant orientations of response keys, indicating a conflict with the internal mapping of pitch to visual space (see Rusconi, Kwan, Giordano, Umiltà, & Butterworth, 2006). While the effect has replicated, it is small (5-15 ms) and does not directly evaluate cross-modal mapping. The current investigation sought to establish a robust evaluation of this relationship while considering potential influences of musical training and a tone's perceived auditory location. After completing a survey concerning the extent of their musical training, forty listeners were familiarized with sawtooth waves constituting one octave from the A-minor scale; fundamental frequencies ranged from 110 to 220 Hz. These eight tones were presented at four positions in virtual auditory space (20 feet left/right, and up/down) by manipulating interaural time and level differences. On subsequent experimental trials listeners assigned the pitch of each random tone to an individually determined corresponding spatial location (Cartesian coordinates) using an 8x8 grid of response pads. Perceived auditory location impacted visual placement of pitches; tones heard on the left resulted in placement on the left, and tones reflecting greater intended elevation in auditory space resulted in higher visual locations. Across listeners, higher pitches increased visual coordinates, reflecting a diagonal relationship. Furthermore, slopes from individual functions slightly increased as years of ensemble training, formal lessons, or training on a primary instrument increased. These findings suggest that the mapping task can reveal individualized perceptual relationships between auditory pitch and visual space that change as a function of musical training and auditory location. They also imply that rapid reports of visual object locations may depend upon accompanying pitch information. For now, the results represent a more direct assessment of pitch-visual space relationships than traditional SMARC demonstrations.

Back to poster listing

Poster 127

The Pythagorean Comma and the Preference for Stretched Octaves
Timothy L. Hubbard (Arizona State University and Grand Canyon University)
Author email: timothyleehubbard@gmail.com

Two findings from music theory and music perception that are challenging to explain, and not previously considered related, are the Pythagorean comma and the preference for stretched octaves. The Pythagorean comma refers to differences between the frequencies of the octave of a scale based upon (a) successive applications of a 3:2 ratio (e.g., circle of fifths) and reduction to a single octave or (b) a 2:1 ratio. The ratio between an octave derived by successive fifths ([3:2]^12) and an octave derived by a 2:1 ratio (2^7) is 1.014. The preference for stretched octaves refers to findings that octaves tuned to a slightly larger ratio than 2:1 (e.g., 1220 cents) are preferred to octaves tuned to a 2:1 ratio (1200 cents), and the ratio between the preferred tuning and a 2:1 ratio is 1.017. The ratios of Pythagorean tuning (1.014), and of the stretched octave (1.017), to a 2:1 ratio are remarkably similar. The Pythagorean comma could be considered a mathematical derivation that predicts preference for stretched octaves if interval sizes are encoded in terms of reduction of successive fifths; additionally, this notion is consistent with findings that transitions between adjacent notes in musical compositions are usually a fifth or less and that the fifth is the most easily recognized component of a major chord. Also, a ratio of 1.01-1.02 would result in a slight dissonance or tension, which could contribute to musical aesthetics or dynamics. Relatedly, if interval size is encoded as pitch distance, and movement through that distance is implicitly encoded in representation of interval size, then a 1.01-1.02 ratio is consistent with overshooting of a 2:1 distance and is similar to overshooting of physical distance in studies of spatial representation.
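The arithmetic behind the 1.014 figure can be made explicit; the following is simply the standard computation of the Pythagorean comma, included as background.

\[
\frac{(3/2)^{12}}{2^{7}} = \frac{3^{12}}{2^{19}} = \frac{531441}{524288} \approx 1.0136 \;(\approx 23.5\ \text{cents}),
\]

that is, twelve just fifths, folded back into a single octave, overshoot seven exact octaves by roughly a quarter of a semitone.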

Back to poster listing

Poster 128

The Root of Harmonic Expectancy: A Priming Study
Trenton Johanis (University of Toronto)
Mark Schmuckler (University of Toronto)
Author email: trenton.johanis@mail.utoronto.ca; marksch@utsc.utoronto.ca

Musical events connect to each other through relations among individual tones, chords, and tonalities. These relations have been quantified empirically through the use of priming paradigms (Bharucha & Stoeckig, 1986), and modelled in neural net architectures (Collins, Tillmann, Barrette, Delbe & Janata, 2014). To date, such work has focused on exploring relations within a given musical level (e.g., tones connect with tones, chords connect with chords) and within a single modality (e.g., audition). The purpose of this project was to extend these boundaries by examining priming across musical levels and sensory modalities. Specifically, this study explored whether (1) a single tone could prime a harmonically related chord; (2) a visual image of a chord could prime an auditory, harmonically related chord; and (3) the co-occurrence of auditory (tone) and visual (chord) information would produce intersensory redundancy gains. To explore these questions, experienced musicians participated in a priming paradigm employing four priming conditions - auditory chord, auditory note, visual chord, and auditory note/visual chord primes. All primes were followed by harmonically related or unrelated auditory target chords, which were either in-tune or out-of-tune. Analyses of mean RTs of tuning judgments revealed a significant interaction between Relatedness and Tuning, with related chords processed more quickly than unrelated chords for in-tune targets, but the reverse for out-of-tune targets. Interestingly, the lack of a three-way interaction between Priming Condition, Relatedness, and Tuning suggested this priming pattern held across the four priming conditions. Follow-up analyses for each priming condition individually, however, revealed that this pattern was strongest for the auditory chord and note primes, but significantly weaker for the visual chord and audiovisual conditions. Accordingly, these results demonstrate that priming is strong across musical event levels, but weaker across sensory modalities, with no intersensory redundancy gains for combined auditory and visual information.

Back to poster listing

Poster 129

Perception of Beats and Temporal Characteristics of the Auditory Image
Sharathchandra Ramakrishnan (University of Texas at Dallas)
Author email: SharathChandraRam@utdallas.edu

This project investigates aspects of a temporal auditory image formed by mapping a linearly varying parameter (like the spatial image of distance from an origin) to a varying rate of audible pulsing. The experiment manipulates pitch perception using the phenomenon of beating that occurs between two interfering tones. Much research has been done on the acuity of the auditory image with respect to pitch and time and on its interaction with the executive functions of attention and working memory. However, not as much research considers temporal aspects of auditory image formation. Previous research shows that auditory images extend in time and that they preserve information about melody. For example, in one experiment, imagery of a lyrical melody was scanned from the beginning, with response time varying with the distance from the starting lyric. Temporal image formation requires regulating an internal clock, which is central to the formation of auditory images related to music perception. In an attempt to understand how temporal information is encoded into auditory imagery, researchers consider ubiquitous temporal image formation in tandem with cross-modal mechanisms like motoric finger-tapping. Similar to the varying rates of beeps heard while backing up a car equipped with an obstacle sensor, the technique developed by the author consists of pulsing beats with a variable rate produced between two tones close to each other in frequency. One tone is fixed and the other varies with increasing distance from the reference tone. Participants in a controlled setting showed surprising accuracy when recalling the pulsing temporal image to estimate its position on a linear slider. Additionally, the study investigates how the temporal perception of this auditory image is affected by varying the pitch of the carrier. These findings could have implications for real-world applications like auditory displays, or other interfaces that use sound to represent information.
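The beating the author exploits follows directly from the superposition of two tones of nearby frequencies; the identity below is standard acoustics background rather than part of the study itself.

\[
\sin(2\pi f_1 t) + \sin(2\pi f_2 t) = 2\cos\big(\pi (f_1 - f_2) t\big)\,\sin\big(\pi (f_1 + f_2) t\big),
\]

so the listener hears a tone near the mean frequency (f_1 + f_2)/2 whose loudness waxes and wanes at the beat rate |f_1 - f_2|. Sweeping the variable tone away from the fixed reference therefore maps the manipulated parameter onto a steadily increasing pulse rate, analogous to the quickening beeps of a parking sensor.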

Poster 130

Knowledge of Western popular music by Canadian- and Chinese-born university students: the extent of common ground
Annabel Cohen (University of Prince Edward Island)
Corey Collett (University of Prince Edward Island)
Jingyuan Sun (University of Prince Edward Island)
Author email: cohen.annabel@gmail.com

Popular music preferences often distinguish generations (e.g., Krumhansl, 2017). Knowledge of popular music can take many forms, leading to several ways of testing it (e.g., familiarity, identification of artist, title, year of popularity, mood of the music). Shared tastes in music can serve as the foundation of friendships, and lack of common ground can be isolating. The present study explored the extent of common musical ground between university students born in China and in Canada. Eighteen Chinese-born Mandarin/English bilingual students (mean age = 19.9 years) attending university in China completed an online Qualtrics questionnaire about Western popular music, previously employed in Canada (Cohen & MacLean, 2018). Participants rated familiarity and identified title, artist, and year of popularity of 25 Western hit songs popular between 1968 and 2017. Songs were cued by 10-sec audio excerpts. Familiarity was highest for songs popular since 2010, but dropped dramatically for all earlier songs. The same high familiarity for recent songs appeared for Canadian students in Canada, but their familiarity declined gradually with decreasing recency and then increased for songs popular during their parents' youth, consistent with a "Reminiscence Bump" (Krumhansl & Zupnick, 2013). Sociopolitical and technological factors partially account for differences in the musical exposure of students from Canada and China. A further small sample of Chinese-born students attending university in Canada showed no evidence of "catching up" as a result of their additional (approximately 2 years of) Western experience. With increasing globalization and the relaxing of regulations regarding communications in China, these cultural differences may decline in the future; however, for now, the results suggest the importance of controlling for cultural background in music psychology studies, and of greater acknowledgement of unfamiliarity with deeply rooted cultural knowledge that cannot be readily gained, possibly due to a critical period for the acquisition of information about popular music (Cohen, 2000).

Music Performance (131-134)

Poster 131

Generalization of Novel Sensorimotor Associations among Pianists and Non-pianists
Chihiro Honda (University at Buffalo, SUNY)
Karen Chow (University at Buffalo, SUNY)
Emma Greenspon (Monmouth University)
Peter Pfordresher (University at Buffalo, SUNY)
Author email: chihiroh@buffalo.edu; pqp@buffalo.edu

In the process of acquiring musical skills, such as playing the piano, we develop sensorimotor associations which enable us to predict motor activities based on perceived pitch. However, past research suggests that these acquired associations are inflexible and show limited generalizability to novel task demands. Pfordresher and Chow (in press) had pianists and non-pianists learn melodies by ear based on a normal or inverted (lower pitch to the right) mapping of pitch. Pianists who were trained with inverted mapping were not better at recalling learned melodies during a later test phase compared to non-pianists, who performed similarly regardless of the pitch mapping that was used during training. These results suggest that musical training may constrain sensorimotor flexibility. The current study further investigates whether piano training constrains the ability to generalize learning based on an unfamiliar (inverted) pitch mapping, by using a transfer-of-training paradigm (Palmer & Meyer, 2000). As in Pfordresher and Chow (in press), pianists and non-pianists in the current study learned a training melody by ear with normal or inverted pitch mapping. After training, participants listened to and then immediately reproduced four types of melodies that varied in their similarity to the melody used during training: same, similar structure, inverted pitch pattern, or different structure. The feedback mapping during the generalization test matched training. Overall, pianists produced fewer errors and performed faster than non-pianists. However, benefits of training were absent for pianists who trained with inverted feedback when they attempted to reproduce a melody with a different structure than the melody used for training. This suggests that piano experience may constrain one's ability to generalize learning that is based on sensorimotor associations.

Poster 132

Singing training improves pitch accuracy and vocal quality in novice singers
Dawn Merrett (The University of Melbourne)
Sarah Wilson (The University of Melbourne)
Author email: dawnmerrett@gmail.com

Most people are able to sing with reasonable accuracy without any specific training, but singing abilities vary widely across the population. Innate ability appears to contribute to good singing, but some individuals also use singing training to improve their performance. Previous cross-sectional studies have shown mixed results when comparing the pitch accuracy and acoustic vocal qualities of trained and untrained singers, with some studies suggesting no advantage for trained singers. Longitudinal research has provided some evidence for the efficacy of singing training, but these studies have been conducted in children, poor-pitch singers, vocal students, and choristers. It remains unclear whether singing training can improve accuracy and vocal quality in typical novice adult singers. In the current study, 14 adults with limited previous singing experience completed 12 weeks of singing training (daily practice videos plus weekly personalized feedback sessions), with comprehensive evaluations of singing accuracy and acoustic vocal parameters before and after training. The results show that training led to improved pitch accuracy for both trained and untrained material (familiar song with lyrics, vocal pitch-matching task), reduced variability in periodicity (jitter) and amplitude (shimmer), increased power in the upper harmonics (singing power ratio, known to reflect a desirable "ring" in the voice), and increased phonation time for single pitches (likely indexing better breath control). A cross-over experimental design was employed, with drama training as a control condition, and no significant differences were found during the drama-training arms of the study, supporting the finding that singing training significantly influences vocal production. This study clearly demonstrates the value of singing training for improving multiple facets of the novice adult singing voice.

Poster 133

Musical prodigies practice more during a critical developmental period
Chanel Marion-St-Onge (University of Montréal)
Megha Sharda (University of Montréal)
Michael W. Weiss (University of Montréal)
Margot Charignon (University of Montréal)
Isabelle Peretz (University of Montréal)
Author email: chanel.marion-st-onge@umontreal.ca

Musical prodigies attain an outstanding level of achievement before adolescence. Becoming very talented in a domain such as music requires extensive practice; in the case of prodigies, this practice happens during childhood, a period during which the brain's plasticity is high. Practice is not the sole factor, however; personality and intellectual abilities also predict music training. In order to examine all of these factors in the emergence of prodigiousness, we tested 19 adult prodigies and 35 non-prodigious musicians individually matched to prodigies on age, sex, and years of musical practice. Sixteen of the non-prodigies were early-trained musicians (age of onset similar to their matched prodigy's) and nineteen were late-trained musicians (age of onset ≥ 3 years older than the prodigy's). Prodigies were defined as those with musical achievements before 14 years of age (e.g., winning a national competition, media appearances). Participants were administered an IQ battery (WAIS/WISC) and a personality questionnaire (Big Five Inventory), and self-reported their hours of practice over the lifespan. The results revealed that the amount of accumulated practice did not differ between groups, but practice before the age of 12 did, with prodigies accumulating more hours of practice than early-trained (even though they began training at the same time) and late-trained musicians. In other words, prodigies' practice was particularly concentrated during early development. Moreover, among prodigies and early-trained musicians, practice before 12 correlated moderately and positively with conscientiousness and processing speed, even though these two measures did not differentiate early-trained musicians from prodigies. Interactions between early intensive practice and personal factors might facilitate fast learning and early high achievement in prodigies.

Poster 134

A case study of prodigious musical memory
Margot Charignon (University of Montréal)
Chanel Marion-St-Onge (University of Montréal)
Michael Weiss (University of Montréal)
Isabelle Héroux (Université du Québec à Montréal)
Megha Sharda (University of Montréal)
Isabelle Peretz (University of Montréal)
Author email: margotcharignon@gmail.com

Musical prodigies are musicians with outstanding achievements before adolescence. There are reports of exceptional memorization in musical prodigies, but they are primarily anecdotal. Here we tested a prodigy (male, 31 years, onset of guitar training = 12 years, total guitar experience = 17 years) who claimed the ability to rapidly learn and memorize difficult guitar pieces. In order to determine whether his advantage was specific to his instrument, or even to music in general, the prodigy performed three types of tasks: (1) playing back an unfamiliar piece on the guitar immediately after a brief, freeform practice period (22.5 minutes), (2) learning short melodies vocally, and (3) two standardized working memory tasks (auditory digit span and visual Corsi board). In the first task, the guitarist was compared to 4 musicians of similar age, onset of guitar training, and years of experience on the guitar. The results showed that he was able to play back almost twice as many notes as the controls. Moreover, based on the blind judgment of a professor who specializes in guitar performance, his rendition was of better quality and musicality than those of the controls. In the vocal melody learning task and the working memory tasks, the prodigy was compared to a larger sample of musicians (n = 13) with a similar amount of musical experience on other instruments. The prodigy's ability to learn melodies vocally was significantly greater than the group average, as was his performance on the auditory and visual working memory tasks. Collectively, these results document a superiority in memory that does not seem specific to his instrument, or even to music. In addition, the prodigy distinguished himself with qualitative elements of his performance. If present at an early age (as self-reported), these enhanced abilities to memorize, learn, and perform would contribute to early advancement and achievement.

Speech and Language (135-145)

Poster 135

What Accounts for Individual Differences in Susceptibility to the McGurk Effect?
Lucia Ray (Carleton College)
Naseem Dillman-Hasso (Carleton College)
Violet Brown (Washington University in St. Louis)
Maryam Hedayati (Carleton College)
Annie Zanger (Carleton College)
Sasha Mayn (Carleton College)
Julia Strand (Carleton College)
Author email: violet.brown@wustl.edu; jstrand@carleton.edu

The McGurk effect is a classic audiovisual speech illusion in which discrepant auditory and visual syllables can lead to a fused percept (e.g., an auditory /ba/ paired with a visual /ga/ often leads to the perception of /da/). The McGurk effect is robust and easily replicated in pooled group data, but there is tremendous variability in the extent to which individual participants are susceptible to it. In some studies, the rate at which individuals report fusion responses ranges from 0% to 100%. Despite its widespread use in the audiovisual speech perception literature, the roots of the wide variability in McGurk susceptibility are largely unknown. This study evaluated whether several perceptual and cognitive traits are related to McGurk susceptibility through correlational analyses and mixed effects modeling. We found that an individual's susceptibility to the McGurk effect was related to their ability to extract place of articulation information from the visual signal (i.e., a more fine-grained analysis of lipreading ability), but not to scores on tasks measuring attentional control, processing speed, working memory capacity, or auditory perceptual gradiency. These results provide support for the claim that a small amount of the variability in susceptibility to the McGurk effect is attributable to lipreading skill. In contrast, cognitive and perceptual abilities that are commonly used predictors in individual differences studies do not appear to underlie susceptibility to the McGurk effect.

Poster 136

Fricative/affricate perception measured using auditory brainstem responses
Michael Ryan Henderson (Villanova University)
Joseph Toscano (Villanova University)
Author email: mhende13@villanova.edu; joseph.toscano@villanova.edu

Human listeners use a wide range of acoustic cues for phoneme categorization that must be combined during perception. Animal models suggest that the peripheral auditory system codes information in such a way that certain cues are combined early. For example, silence duration and frication rise time both signal the distinction between /S/ and /tS/. Auditory nerve responses at the onset of frication in these sounds are predicted to be greater for longer silence durations and faster rise times. As a result, certain combinations of acoustically-different stimuli (long silence/slow rise time, short silence/fast rise time) will produce similar responses at the auditory nerve. Thus, these models predict that the cues are combined at the earliest stages of perceptual processing. However, it is unclear whether this occurs in human listeners who use these cues to distinguish speech sounds. Because silence duration is a more reliable cue, we expect listeners to weight it more than rise time, and as a result, human listeners may initially encode these two cues independently of each other. We investigated this issue by recording auditory brainstem responses (ABRs) while listeners heard stimuli varying in silence duration and rise time, corresponding to prototypical /S/, prototypical /tS/, and ambiguous sounds that were acoustically distinct but perceptually similar. Listeners also completed a /S/-/tS/ identification task in which they categorized stimuli varying along the two acoustic dimensions. Results revealed different ABRs for the ambiguous sounds, suggesting that these two cues are not combined at early stages of processing. ABRs were also more distinct for differences in silence duration than for differences in rise time. This result is consistent with listeners' behavioral responses, which showed a greater reliance on silence duration. Overall, results suggest that listeners encode the two cues independently of each other, weighting and integrating them at later stages of processing.

Poster 137

Vowel-pitch interactions in the perception of pitch interval size
Frank Russo (Ryerson University)
Dominique Vuvan (Skidmore College)
Author email: russo@ryerson.ca

Note-to-note changes in brightness are able to influence the perception of interval size. Changes that are congruent with pitch tend to expand perceived interval size, whereas changes that are incongruent tend to contract it. In the case of singing, the brightness of notes can be affected by many variables, including the lyrics and especially their vowel content. In a series of experiments, we have been assessing whether note-to-note changes in brightness arising from vowel content influence the perception of relative pitch. In Experiment 1, three-note sequences were synthesized so that they varied with regard to the brightness of vowels from note to note. As expected, brightness influenced judgments of interval size. Changes in brightness that were congruent with changes in pitch led to an expansion of perceived interval size. In Experiment 2, the final note of the three-note sequences was removed, and participants were asked to make speeded judgments regarding pitch contour. An analysis of response times revealed that the brightness of vowels influenced contour judgments. Changes in brightness that were congruent with changes in pitch led to faster response times than did incongruent changes. These findings suggest that the brightness of vowels yields an extra-pitch influence on the perception of relative pitch in song. However, it is also possible that these effects were driven not by spectral centroid but rather by some aspect of the vowel articulatory space. In Experiment 3 (currently underway), we have synthesized three-note sequences that vary with regard to their vowel content. In all, we have captured 9 positions across the vowel articulatory space (F1, F2). Acoustic descriptions of vowels that focus on the articulatory space (F1, F2) and the spectral centroid (Fc) will be entered into a regression to assess their relative weight in predicting the perception of interval size.
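
For reference, the spectral centroid (Fc) invoked above as a brightness measure is the amplitude-weighted mean frequency of a sound's spectrum; the following minimal sketch is illustrative only and is not the authors' analysis code:

    # Illustrative computation of the spectral centroid (Fc), a standard acoustic
    # correlate of perceived brightness.
    import numpy as np

    def spectral_centroid(signal, sr):
        """Amplitude-weighted mean frequency (Hz) of the magnitude spectrum."""
        mag = np.abs(np.fft.rfft(signal))                # magnitude spectrum
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
        return np.sum(freqs * mag) / np.sum(mag)

    # A brighter vowel concentrates more energy in higher partials, raising the
    # centroid relative to a darker vowel sung at the same fundamental pitch.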

Poster 138

Interpretation of pitch patterns in spoken and sung speech
Shannon Heald (University of Chicago)
Stephen Van Hedger (University of Western Ontario)
Howard Nusbaum (University of Chicago)
Author email: smheald@uchicago.edu

While intonation patterns in speech are typically thought to provide information about talker intent, attitude, or emotional state, research has indicated that intonation patterns can also provide descriptive information about the motion of objects or events. Thus, a single intonation pattern (e.g., a rising pitch) might have different interpretations depending on the context in which it is heard. Across two experiments, we explored how participants interpreted intonation patterns through a sentence comprehension task. Participants judged the meaningfulness of sentences that could describe vertical motion of an object (e.g., "The balloon is flying up") in a speeded task, with the intonation pattern of the sentences either rising or falling in a continuous fashion (Experiment 1) or in a quantized manner meant to emulate singing (Experiment 2). The pattern of response times from both experiments suggested that participants interpreted pitch change as both a marker of vertical motion as well as a marker of talker certainty (i.e., declarative vs. queclarative). The extent to which participants interpreted pitch change as either describing vertical motion or as a marker of talker certainty could be predicted by participant awareness of pitch change, collected during debriefing. Awareness of pitch, however, resulted in opposite patterns across Experiments 1 and 2. Participants who mentioned pitch in Experiment 1 showed a pattern consistent with interpreting pitch change as a queclarative; in contrast, participants who mentioned pitch in Experiment 2 showed a pattern consistent with interpreting pitch as describing vertical motion. Taken together, these results suggest that the interpretation of pitch patterns in language is dynamic and context dependent.

Poster 139

How binding words with a musical frame improves their recognition
Agnès Zagala (University of Montréal)
Séverine Samson (University of Lille)
Author email: agnes.zagala@gmail.com

Even though it is widely agreed that music is an excellent memory support for verbal information, the mechanisms underlying this phenomenon remain largely unknown. In this behavioral study, we analyzed differences in recognition memory for sung and spoken tri-syllabic words. We set up a protocol in which sung or spoken words were memorized across several presentation phases and memory tests, in order to focus on the dynamics of memorization. The memory test consisted of item recognition (whether a word had been presented or not) and context recognition (whether the word had been presented sung or spoken). Thirty-nine healthy French participants took part in the experiment. In a first session, we presented them with a first list of 40 words (20 spoken and 20 sung) to learn; in a second session, eleven hours later, we presented a second list of 40 words, then a reactivation of both lists, and finally the memory tests. We divided the participants into two groups: half of them had a night of sleep during the eleven-hour delay, while the other half had a regular waking day, in order to evaluate the impact of the consolidation delay and of sleep on recognition abilities. Item and associated-context recognition scores were significantly higher for sung words than for spoken words. Furthermore, performance on item and context recognition improved significantly when participants slept between item presentation and the memory test. These results show that adding associated information, such as a musical frame, to a verbal task can have a significant positive effect on recognition memory, as can sleep during the learning phase.

Poster 140

Exploring Linguistic Rhythm in Light of Poetry: Durational Priming of Metrical Speech in Turkish
Züheyra Tokaç (Boğaziçi University)
Esra Mungan (Boğaziçi University)
Author email: zuheyratokac@gmail.com

An imperative of linguistic rhythm research is to study diverse languages, with an emphasis on native-speaker experiences, in order to broaden the variety of existing knowledge. A previous study suggests that temporal priming of linguistic stress via pitch and amplitude changes in rhythmic sequences facilitates phoneme detection latency in subsequent French nonwords (Cason & Schön, 2012). Our replication of this study in Turkish suggests that trisyllabic nonwords, whether temporally aligned or misaligned, were successfully primed by metrically matching rhythmic sequences, whereas pseudowords and bisyllabic nonwords were not. While these studies, among others, focus on stress as the constituent of linguistic rhythm, both Turkish and French are described as syllable-timed languages (e.g., Nespor, Shukla, & Mehler, 2011). Moreover, Turkish poetry presents a distinct example of poetic meter in which durational rather than stress-based cues induce rhythm. To explore the durational organization of rhythm in Turkish, we revised Cason and Schön's rhythmic primes, which were characterized by pitch and amplitude, and instead employed rhythmic primes characterized by duration and amplitude to prime closed and open syllables. Pilot results suggest that metrical match and temporal alignment might facilitate phoneme detection latency in nonwords and pseudowords, respectively. Results will be discussed in relation to the rhythm-class hypothesis and music-language interactions.

Poster 141

The effects of item and talker variability on the perceptual learning of time-compressed speech following short-term exposure
Karen Banai (University of Haifa)
Hanin Karawani (University of Haifa)
Yizhar Lavner (Tel-Hai College)
Limor Lavie (University of Haifa)
Author email: kbanai@research.haifa.ac.il

Both short-term exposure and systematic training result in perceptual learning of time-compressed speech (TCS), but the similarities and differences between learning on these two timescales are not well understood. The characteristics of training-induced learning of TCS have been investigated in a series of previous studies in our lab. Together, these studies suggest that training-induced learning is partially specific to the acoustics and semantics of the stimuli used for training. These basic characteristics remain unaltered by either stimulus repetition or talker variability during training; stimulus repetition may even result in more robust learning. The effects of short-term exposure to TCS have been explored in other labs, suggesting that short-term, rapid learning is not specific to the semantic characteristics of the briefly-exposed stimuli or to acoustic characteristics such as compression rate. However, due to substantial methodological differences, the outcomes of the training studies and the brief-exposure studies are not directly comparable. Therefore, we used the same tasks and stimuli as in our previous training studies to examine the effects of item and talker variability on exposure-induced TCS learning. Similar to training-induced learning, exposure-induced learning was also partially specific to the acoustics of the trained stimuli, with no transfer to new items produced by new talkers. This pattern was unaltered by either item repetition or talker variability. These similarities to training-induced learning suggest that the characteristics of perceptual learning may remain stable across different timescales of experience. Added to previous findings on the benefits of spacing training across several sessions, it seems that the added value of longer training lies in providing multiple brief exposures that serve to stabilize the effects of rapid exposure and strengthen them over time.

Poster 142

The effect of interlocutor voice context on bilingual lexical decision
Monika Molnar (University of Toronto)
Kai Leung (University of Toronto)
Author email: monika.molnar@utoronto.ca

Bilinguals often communicate with interlocutors who speak either one or both of their languages. Therefore, interlocutor identity is a potential cue for activating the context-appropriate language during bilinguals' spoken language processing. It has been demonstrated that proficient bilinguals' spoken language processing is affected by interlocutor context: in a visual-auditory lexical decision task, bilinguals are able to predict the context-appropriate language based on the visual cues of interlocutor context (e.g., Molnar et al., 2015). It has also been demonstrated that bilinguals, as compared to monolinguals, process talker-voice information more efficiently from an early age (e.g., Levi, 2017; Fecher & Johnson, 2019). In the current study, we assessed whether bilinguals are able to predict the context-appropriate language based on voice information alone. First, in a same-different task, English monolingual and bilingual participants were familiarized with the voices of 4 female speakers who spoke either English (a language shared across the monolingual and bilingual participants) or Farsi (unknown to both groups). Then, in a lexical decision task, the participants heard the same 4 voices again, but this time speaking only in English. We predicted that if the participants had established a voice-language link in the first part of the task, then their response times should decrease when they heard an "English voice" (as opposed to a "Farsi voice") uttering an English word in the lexical decision task. Our preliminary results suggest that the bilinguals' performance is facilitated by the established voice-language link.

Poster 143

Manual directional gestures facilitate speech learning
Anna Zhen (New York University Shanghai)
Stephen Van Hedger (University of Chicago)
Shannon Heald (University of Chicago)
Susan Goldin-Meadow (University of Chicago)
Xing Tian (New York University Shanghai)
Author email: xing.tian@nyu.edu

Action and perception interact in complex ways to shape how we learn. In the context of language acquisition, for example, hand gestures can facilitate learning novel sound-to-meaning mappings that are critical to successfully understanding a second language. However, the mechanisms by which motor and visual information influence auditory learning are still unclear. We hypothesize that the extent to which cross-modal learning occurs is directly related to the common representational format of perceptual features across motor, visual, and auditory domains (i.e., the extent to which changes in one domain trigger similar changes in another). To the extent that information across modalities can be mapped onto a common representation, training in one domain may lead to learning in another domain. To test this hypothesis, we taught native English speakers Mandarin tones using directional manual gestures. Two types of gestures were included: 1) gestures that were congruent with pitch direction (congruent gestures, e.g., an up gesture moving up and a down gesture moving down, in the vertical plane), and 2) gestures that were rotated 90 degrees from the congruent gestures (rotated gestures, e.g., an up gesture moving away from the body and a down gesture moving toward the body, in the horizontal plane). Each of four groups (18 participants per group) either performed or watched one of the two types of gestures. Watching or performing the congruent gestures significantly enhanced tone category learning compared to auditory-only training. Moreover, when gestures were rotated, performing the gestures resulted in significantly better learning compared to watching the rotated gestures. Our results suggest that when a common representational mapping can be established between motor and sensory modalities, auditory perceptual learning is likely to be enhanced.

Poster 144

Social Judgments about Speaker Confidence and Trust: A Computer Mouse-Tracking Paradigm
Jennifer Roche (Kent State University)
Shae D. Morgan (University of Louisville)
Erica Glynn (Kent State University)
Author email: jroche3@kent.edu; eglynn1@kent.edu

One's ability to express confidence is critical for achieving one's goals in a social context, such as commanding respect from others, establishing higher social status, and being persuasive. How individuals express confidence may shape the perceived trustworthiness of their message, but the perception of confidence may also be wrapped up in socio-pragmatic cues produced by the speaker. In the current production/comprehension study, we asked four speakers (2 male, 2 female) to answer trivia questions under three speaking contexts: naturally, overconfidently, and with a lack of confidence. An evaluation of the speakers' acoustics indicated that speakers significantly varied their acoustic cues (e.g., rising intonation) as a function of gender and/or speaking context. The speakers' answers to the trivia questions in the three contexts (natural, overconfident, lacking confidence) were then presented to listeners (N = 26) in a social judgment task. We examined the cognitive and decision-making processes associated with listeners' social judgments of male/female vocal confidence using a mouse-tracking paradigm. Listeners were sensitive to the speakers' acoustic modulation, as pitch range and rising intonation were both significant predictors of the social judgments listeners made about speaker confidence. Results also indicated a significant positive relationship between listeners' judgments of speaker confidence and trustworthiness, but there was no difference in trustworthiness ratings as a function of speaker gender. However, listeners rated the female speakers as less confident than the male speakers. Listeners also exhibited more cognitive competition (area under the curve) and thinking (response time) when rating the female speakers relative to the male speakers. Thus, although there was a relationship between trust and confidence, only the vocal cues to confidence were processed differentially by listeners as a function of speaker gender. We consider, then, how listeners' social judgments about trust and confidence may have been influenced by pre-existing stereotypes arising from social, heuristic-based processes.

Poster 145

Women are Better than Men in Detecting Vocal Sex Ratios
John Neuhoff (The College of Wooster)
Author email: jneuhoff@wooster.edu

Unbalanced sex ratios occur when one sex significantly outnumbers the other and have been shown to influence mating and courtship behaviors in humans and many other species. The majority sex faces more competition for mates and more willingly conforms to the preferences of the minority sex. However, the question of how observers perceive sex ratios has received little attention in the literature. Here, we presented listeners with samples of five simultaneous male and female voices that varied in the number of male and female talkers. We asked participants to estimate the sex ratios in brief audio clips by using a slider presented on a computer screen. We found that listeners could do the task accurately and that the significant main effect for actual sex ratio was qualified by two significant interactions. There was an interaction between actual sex ratio and participant sex that showed that males heard more male voices than females heard only when female voices were plentiful. Additionally, females heard more male voices than males heard only when male voices were plentiful. We also found a main effect for error rates indicating that women performed the task more accurately than men and that longer listening times resulted in better performance.

Audition and Emotion (146-151)

Poster 146

Impact of Affective Vocal Tone on Social Judgments of Friendliness and Political Ideology
Erica Glynn (Kent State University)
Thimberley Morgan (Kent State University)
Rachel Whitten (Kent State University)
Madison Knodell (Kent State University)
Jennifer Roche (Kent State University)
Author email: eglynn1@kent.edu

Misattributions of emotionally-valenced vocal cues interacting with lexical and semantic content may be implicated in misinterpretations of socio-pragmatic intent. Currently, it is less clear how tone of voice may impact and shape decisions made in social contexts when the tone competes with a listener's personal beliefs. In the current study, we use a computer mouse-tracking paradigm to evaluate decision-making metrics about a social other, when that social other is described as behaving in a politically motivated way (e.g., "Brandon has a pro-choice bumper sticker on his car") in a positive, negative, and/or neutral tone of voice. An initial evaluation of the data indicated that most participants reported being moderate in their political views, but the majority tended to agree more with the individuals described as behaving more liberally. Additionally, participants tended to rate those described as behaving liberally as more friendly. This also seemed to impact action dynamics, as listeners exhibited more hesitation (x-flips) and cognitive pull (area under the curve) when the vocal tone did not match the participant's political ideology (e.g., a negative tone paired with a liberal statement). Moreover, when listeners exhibited more cognitive pull, they also tended to rate the person behaving politically as significantly less friendly. This suggests that positive social judgments of friendliness result from aligned ideologies/perspectives, while exerting more cognitive effort may lead to more negative social judgments when there is a conflict between ideology and vocal cues. Therefore, it would seem that the interaction between political ideology and tone of voice has the potential to impact and shape the decisions listeners make in a social context.
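
For readers unfamiliar with these mouse-tracking measures, x-flips and area under the curve are typically derived from the recorded cursor trajectory; the sketch below is a generic illustration under the assumption that the trajectory is available as x/y coordinate arrays (it is not the authors' analysis code):

    # Illustrative computation of two common mouse-tracking measures:
    # x-flips (horizontal direction reversals, read as hesitation) and the area
    # between the cursor path and the straight start-to-end line (cognitive pull).
    import numpy as np

    def x_flips(xs):
        """Count reversals of horizontal movement direction."""
        dx = np.diff(np.asarray(xs, dtype=float))
        dx = dx[dx != 0]              # ignore frames with no horizontal movement
        return int(np.sum(np.sign(dx[1:]) != np.sign(dx[:-1])))

    def area_under_curve(xs, ys):
        """Net area between the cursor path and the straight line joining its start
        and end points (shoelace formula; deviations on opposite sides of the line
        cancel in this simple version)."""
        x = np.append(np.asarray(xs, dtype=float), xs[0])   # close the polygon with
        y = np.append(np.asarray(ys, dtype=float), ys[0])   # the chord back to the start
        return 0.5 * abs(np.sum(x[:-1] * y[1:] - x[1:] * y[:-1]))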

Poster 147

Emotive Attributes of Complaining Speech
Maël Mauchand (McGill University)
Marc Pell (McGill University)
Author email: mael.mauchand@mail.mcgill.ca

Complaining is a form of emotive speech in which a speaker attempts to gain empathy from their interlocutor by intentionally displaying increased affect in their voice. Beyond a general sense of expressivity, little is known about the acoustic and perceptual attributes of complaints and how they relate to genuine emotions. To investigate the acoustic profile of complaints, 4 French and 4 Québécois (Canadian French) speakers uttered a total of 320 short sentences in a complaining or neutral manner. Fourteen acoustic measures were extracted from each utterance and compared between complaints and neutral utterances within each group. Additionally, 20 French and 20 Québécois listeners rated a subset of the stimuli on 6 scales representing basic emotions (happiness, sadness, anger, surprise, fear, and disgust). Complaints were found to exhibit a higher mean pitch and larger pitch variability, as well as increased energy at high frequencies, markers of high-arousal emotions like anger and surprise, but also resembling vocalizations of pain. Several acoustic markers of sadness were also highlighted. These acoustic features were paralleled by high emotional ratings of anger, sadness, and surprise by listeners. Yet several markers of voice control, voluntary exaggeration, and emphatic rhythm modulation highlighted the intentional aspect of these affective cues, marking the difference between genuine emotion and emotive speech such as complaints. In addition, cultural differences were noted: Québécois speakers emphasized anger-related cues and showed more expressivity, whereas French speakers displayed more sadness and showed more regularity and voice control. As a result, Québécois complaints were rated very high on anger, surprise, and disgust, while French complaints received higher ratings of sadness. These group-based differences reveal a social dimension of complaints that goes beyond emotion and depends on a speaker's socio-cultural identity.

Poster 148

Effects of vocally-expressed emotions on visual scanning of faces: Evidence from Mandarin Chinese
Shuyi Zhang (McGill University)
Pan Liu (McGill University; Western University)
Simon Rigoulot (McGill University; Université du Québec à Trois-Rivières)
Xiaoming Jiang (McGill University; Tongji University)
Marc Pell (McGill University)
Author email: shuyi.zhang@mail.mcgill.ca

In interpersonal conversations, people integrate and process information from multiple sources, including facial and vocal expressions, to understand another person's emotions and social intentions. Previous studies of English-speaking adults demonstrate that vocally-expressed emotions implicitly guide eye movement patterns to congruent facial expressions in a visual array; here we examined the influence of voice information on visual attention in a culturally-distinct group, Mandarin-speaking Chinese, to test whether they exhibit differences in context sensitivity to faces during the same task. Eye movements of 24 native Mandarin participants were recorded as they heard emotionally-inflected Chinese pseudo-utterances and scanned an array of four faces portraying fear, anger, happiness, and neutrality (one face was emotionally congruent with the voice). The duration and frequency of fixations to each face were analyzed in two time windows: during simultaneous presentation of the emotional voice (early time window); and after the offset of the voice (late time window). Our results show that during the late time window, Chinese participants looked significantly longer at faces that expressed congruent vs. incongruent vocal emotions, suggesting a cross-channel emotion congruency effect. However, during both early and late time windows, Chinese participants looked more frequently at incongruent face-voice pairings, in contrast with our previous study of English-speaking participants who looked both longer and more frequently at congruent pairings (Rigoulot & Pell, 2012). Our results suggest that processing multisensory emotional cues is mediated by culturally-learned behaviors and that Chinese adults display greater context sensitivity to social environments.

Poster 149

Perception of emotion in West African indigenous music
Cecilia Durojaye (Max Planck Institute for Empirical Aesthetics)
Author email: cecilia.durojaye@ae.mpg.de

Empirical research in music psychology has shown that many forms of music can express extra-musical phenomena such as emotions. While emotion has received much scholarly attention in Western cultural contexts, there is a dearth of research on the multitude of cultures that are not Western, Educated, Industrialized, Rich, and Democratic (WEIRD). This study explores the expression and perception of emotion in a non-WEIRD culture. The work examines a type of West African indigenous music known as 'dundun music', the music of the singing drum (mostly associated with the Yoruba people of Nigeria). Four groups of dundun musicians were asked to perform short pieces of music (1-2 minutes) to express five basic emotions: happiness, sadness, fear, anger, and a Yoruba emotion for which there is no direct English word, but which roughly means being truly moved. From these, the best exemplar representing each emotion was chosen as an experimental stimulus. Thirty participants from the same culture listened to the five exemplar stimuli and judged which emotions they perceived to be communicated by the music. Data were collected both by a questionnaire (containing forced-choice and open-ended responses) and through follow-up interviews. The results confirm that listeners generally favored the emotions intended by the performers, but frequently also perceived other emotions in the same musical piece. Analyses of the data indicate that listeners' perception of emotion depends both on universal factors, such as beat rate and dynamics, and on specific cultural factors, such as shared language. The findings indicate that emotions are reliably communicated and perceived in non-WEIRD cultures within their own social context, and are consistent with earlier research emphasizing the role of the social context and the perceiver in emotion perception, as well as the reliable communication of emotions in music performance.

Poster 150

Timbral, temporal, and expressive shaping of musical material and their respective roles in musical communication of affect
Kit Soden (McGill University)
Jordana Saks (McGill University)
Stephen McAdams (McGill University)
Author email: kit.soden@mcgill.ca

How does the listener perceive the expressive intent of the performer? Can the performer reshape or recompose the affective information composed into the melodic material of a dramatic work? How is the intended expression influenced or affected by the range of timbral variation available on a given instrument? Our study seeks to understand the roles of timbre and time in the musical communication process. We involved composer, performer, and listener by abstracting melodic lines from their original context within operatic works and then reinterpreting these excerpts using different instrumental timbres, varying tempos, and expressive affects. Three short excerpts from Romantic-era operas were chosen based on their clarity in signifying one of five basic discrete emotions (anger, fear, happiness, love, sadness). For the reinterpretations, soprano voice and five orchestral instruments were chosen to include a wide variety of timbral and expressive characteristics. Six professional musicians performed the excerpts while attempting to express each of these five basic emotions, yielding a total of 90 excerpts. In four separate experiments, participants were asked to label or sort these excerpts based on emotion categories or timbral/temporal qualities. In Experiment 1, listeners selected the appropriate emotion descriptors, and results showed that although affective information was communicated, the musical communication process depended firstly on the excerpt, secondly on the instrument and/or instrumentalist, and finally on the intended emotion. Experiment 2 showed that when listeners matched intended emotions for a given instrument and excerpt, the composed intention had next to no effect and listeners could easily match the emotion categories. Finally, when the expressed emotion of excerpts on a given instrument was matched to those of the voice (Exp. 3) or clarinet (Exp. 4), affect in instrumental timbre was more readily perceived than affect in an operatic vocal timbre.

Poster 151

Emotional responses to acoustic features: Comparison of three sounds produced by Ram's horn (Shofar)
Leah Fostick (Ariel University)
Maayan Cytrin (Ariel University)
Eshkar Yadgar (Ariel University)
Howard Moskowitz (Mind Genomics Associates, Inc.)
Harvey Babkoff (Bar-Ilan University)
Author email: leah.fostick@ariel.ac.il

Sounds convey emotional information to the listener, most notably in speech and music. Other sounds, however, are also meant to induce a certain emotional state. Sounds produced by the ram's horn (shofar) have been used throughout history for both secular and religious purposes; today, they are part of Jewish synagogue services. The shofar sounds differ along one major parameter: a continuous sound (Tekiah) versus two versions of interrupted sound, three short sounds (Shevarim) and nine very short sounds (Teruah). Acoustically, these three sounds differ in their temporal structure, but are similar in pitch, intensity, and formants 1 to 3. The purpose of the current study was to test the emotional responses to the three shofar sounds and to see whether they conform with predictions based on the religious and acoustical literature. Forty undergraduate students listened to the three shofar sounds and rated them using emotional descriptors, following the Mind Genomics method. Participants rated each sound with 24 different combinations of 3 to 4 descriptors, chosen randomly from a pool of 16. Each sound was tested separately, with 10 minutes of task distraction separating them. The results generally conform with the emotional responses reported in the religious and acoustical literature, and different emotional effects were reported for the three sounds. All of the sounds were associated with "arousal", while only the continuous sound (Tekiah) was associated with the feeling "full of awe". The Teruah was described as "joyful", "overwhelmed", and "jittery", while the Shevarim was described by fear-associated descriptors ("afraid", "anxious", and "heavy in my chest"). Men used "arousal", "hopeful", and "full of awe" more frequently, while women used "afraid", "anxious", "heavy in my chest", and "excited" more frequently. We suggest that the temporal structure of the three shofar sounds is involved in conveying their different emotions.

Program creator: Laura Getz (lgetz@sandiego.edu)