Millsaps College
Music Department
Linguistic processing, especially syntactic processing, is often considered a hallmark of human cognition; thus, the domain specificity or domain generality of syntactic processing has attracted considerable debate. The present experiments address this issue by simultaneously manipulating syntactic processing demands in language and music. Participants performed self-paced reading of garden path sentences, in which structurally unexpected words cause temporary syntactic processing difficulty. A musical chord accompanied each sentence segment, with the resulting sequence forming a coherent chord progression. When structurally unexpected words were paired with harmonically unexpected chords, participants showed substantially enhanced garden path effects. No such interaction was observed when the critical words violated semantic expectancy or when the critical chords violated timbral expectancy. These results support a prediction of the shared syntactic integration resource hypothesis (Patel, 2003), which suggests that music and language draw on a common pool of limited processing resources for integrating incoming elements into syntactic structures. Notations of the stimuli from this study may be downloaded from pbr.psychonomic-journals.org/content/supplemental.
The extent to which syntactic processing relies on special-purpose cognitive modules has attracted considerable debate. The current experiments address this issue by simultaneously manipulating syntactic processing demands in language and in music. Participants performed self-paced reading of garden-path sentences in which a structurally unexpected word caused temporary syntactic processing difficulty. As participants read, each button press triggered a musical chord, with the resulting sequence forming a coherent Bach-style chord progression. When a harmonically unexpected chord was paired with a structurally unexpected word, participants showed substantially enhanced garden-path effects (as measured by reading times), suggesting that language and music were competing for similar processing resources. No such interaction was observed when the critical word violated semantic, rather than syntactic, expectancy, nor when the critical chord violated timbral, rather than harmonic, expectancy. These results support a prediction of the shared syntactic integration resource hypothesis (SSIRH, Patel, 2003), which suggests that music and language draw on a common pool of limited processing resources for integrating incoming elements (such as words and chords) into syntactic structures.
- by Jason Rosenberg and one coauthor
For over half a century, musicologists and linguists have suggested that the prosody of a culture’s native language is reflected in the rhythms and melodies of its instrumental music. Testing this idea requires quantitative methods for comparing musical and spoken rhythm and melody. This study applies such methods to the speech and music of England and France. The results reveal that music reflects patterns of durational contrast between successive vowels in spoken sentences, as well as patterns of pitch interval variability in speech. The methods presented here are suitable for studying speech-music relations in a broad range of cultures.
Authors: Allison R. Fogel, Jason C. Rosenberg, Frank Lehman, Gina R. Kuperberg, and Aniruddh D. Patel
Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5–9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence within the melody. Each such ‘authentic cadence’ (AC) melody was paired with a ‘non-cadential’ (NC) melody matched in length, rhythm, and melodic contour but differing in implied harmonic structure. On average, participants showed much greater consistency in the notes sung following AC than NC melodies. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation and designing experiments that compare the cognitive mechanisms of prediction in music and language.
If the immediacy of spatial relationships in music is inversely proportional to the primacy of directionality, the metaphor of perceptual space in compositional practice is perhaps most relevant and useful when temporal linearity within and between musical gestures is negated. Katharina Rosenberger’s PERIPHER, at least at times, exemplifies this, negating goal-oriented narrative relationships in favor of affine geometrical paradoxes. As the name of the piece implies, PERIPHER’s musical material is nebulous but concentrated, lying on the boundary between contradictory perceptual spaces. This will be illustrated using four sound examples taken from the premiere, performed by l’Orchestre de Chambre de Genève in La Chaux-de-Fonds, Switzerland.