Music Research Overview
Selected Projects
A few of my favorite music research projects can be found here! From pharmacological studies to corpus studies, and from behavioral studies to music analysis, the study of how humans connect to music requires me to wear many hats, but I wouldn’t have it any other way.
Scroll through the descriptions below
When characterizing nominally “sad” music, listeners appear to offer a wide range of descriptions. Could it be that this large variance in responses is a consequence of the failure to distinguish melancholy from grief? The current project addresses this distinction by examining listeners’ perceptions of “sad” music in a series of five studies. The results are consistent with the idea that different musical parameters can be identified in melancholy and grief music, that listeners can distinguish musical grief from musical melancholy, and that these two stimulus types give rise to different emotions. These studies have implications for refining the umbrella concept of “sadness” in music research: music that has previously been called “sad” truly consists of more than one emotional state.
During the first lockdown period of the COVID-19 pandemic (April-May 2020), we surveyed changes in music listening and making behaviors of over 5,000 people. We found that people experiencing increased negative emotions used music for solitary emotional regulation, whereas people experiencing increased positive emotions used music as a proxy for social interaction. Light gradient-boosted regressor models were used to identify the most important predictors of an individual’s use of music to cope, the foremost of which was, intriguingly, their interest in the novel genre of “coronamusic.” Overall, our results emphasize the importance of real-time musical responses to societal crises, as well as individually tailored adaptations in musical behaviors to meet socio-emotional needs.
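To make the modeling step concrete, here is a minimal sketch, assuming the LightGBM library and hypothetical survey columns such as "coronamusic_interest"; it illustrates how a gradient-boosted regressor can rank predictors of coping through music, and is not the study’s actual analysis code.

```python
# A minimal sketch (not the study's analysis code): fit a light
# gradient-boosted regressor and rank predictors of musical coping.
# The file name and column names below are hypothetical.
import pandas as pd
from lightgbm import LGBMRegressor

df = pd.read_csv("survey_responses.csv")  # one row per respondent

features = ["coronamusic_interest", "negative_emotion_change",
            "positive_emotion_change", "age", "musician_status"]
X, y = df[features], df["music_coping_score"]

model = LGBMRegressor(n_estimators=500, learning_rate=0.05, random_state=0)
model.fit(X, y)

# Rank predictors by the fitted model's feature importances.
importance = pd.Series(model.feature_importances_, index=features)
print(importance.sort_values(ascending=False))
```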
The Previously Used Musical Stimuli (PUMS) database is an online, publicly available database where researchers can find a list of 22,417 musical stimuli that have previously been used in the literature on how music can convey or evoke emotions in listeners. Each musical stimulus used in these studies was coded according to various criteria, such as its designated emotion, how it was selected, its length, whether it was used in a study of perceived or induced emotion, and its style or genre. The database offers insight into how music has been used in psychological studies over a period of 90 years and provides a resource for scholars wishing to use music in future behavioral or psychophysical research.
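As one illustration of how the database might be put to work, the sketch below filters a hypothetical export of the PUMS with pandas to shortlist candidate stimuli; the file name and column labels are assumptions made for the example, not the database’s actual schema.

```python
# A hedged sketch: filtering a hypothetical CSV export of the PUMS database
# to shortlist stimuli for a new study. The file name and column labels are
# assumptions, not the actual schema.
import pandas as pd

pums = pd.read_csv("pums_database.csv")

# Example query: excerpts coded as conveying "sadness" in studies of
# perceived (rather than induced) emotion, shorter than 60 seconds.
shortlist = pums[
    (pums["designated_emotion"] == "sadness")
    & (pums["emotion_type"] == "perceived")
    & (pums["length_seconds"] < 60)
]
print(shortlist[["stimulus_name", "genre", "source_study"]].head())
```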
A recent review of over 22,000 musical stimuli has shown that research has focused on only 9 emotional terms and that these terms are used inconsistently across researchers. The variable use of emotional terminology could indicate that researchers have unintentionally conflated multiple emotional states. In this paper, I describe how researchers and participants can be trained to describe emotions with greater specificity and to better differentiate the emotions they feel and perceive while listening to music. In considering the “future directions” of music and emotion research, it will be important to use methodology consistent with emotional granularity in order to discover (potentially) many more than 9 emotions that can be expressed and elicited by music.
The current study examines possible effects of acetaminophen on both perceived and felt responses to emotionally charged sound stimuli. Additionally, it tests whether the effects of acetaminophen are specific to particular emotions (e.g., sadness, fear) or whether acetaminophen blunts emotional responses in general. Finally, the study tests whether acetaminophen has similar or differential effects on three categories of sound: music, speech, and natural sounds. The experiment employs a randomized, double-blind, parallel-group, placebo-controlled design. Given that some 50 million Americans take acetaminophen each week, the blunting effect of acetaminophen suggests that future studies in music and emotion might consider controlling for the pharmacological state of participants.
The idea of Durchbruch—passages of “breakthrough”—has both intrigued and perplexed scholars over the last three decades. My analysis of three Durchbruch moments in Mahler’s symphonies focuses on musical parameters such as texture, dynamics, and orchestration. In addition to presenting a musical analysis of Durchbruch passages, I show how two recent psychological theories—the Suppressed-Fear Theory (Huron 2006) and the Hive-Switch Theory (Haidt 2012)—can be used to explain why Durchbruch compositional strategies give rise to feelings of transcendence. I demonstrate that the Mahler Durchbruch passages are intimately related to the success or failure of the sonata form and connect extramusical ideas across the movements of a symphony. Powerful moments of music may have structural features consistent with those that lead to musical transcendence, but they can only be considered moments of Durchbruch if they have repercussions for the movement as a whole.
Previous research has suggested that grief may act as an overt, social emotion, while fear and melancholy may act as covert, self-directed emotions. That is, grief may function to solicit assistance from others, whereas fear and melancholy may function to improve one’s own situational prospects. In three studies, we investigate how viewers perceive emotions in dance, music, and multimodal (dance and music) performances. In the first study, we code how much the dancers touched one another while responding to each prompt. In the second study, we test the idea that viewers perceive more sociality among the dancers in grief prompts than in melancholy and fear prompts. Finally, we perform a content analysis of interviews with the dancers, which may suggest that they intended to be more prosocial when expressing grief than when expressing melancholy or fear.
Audiences have marveled at how Billie Eilish’s music employs techniques of Autonomous Sensory Meridian Response (ASMR), such as close miking, whispering, and the use of dentist drills and Easy-Bake ovens. In this study, participants answered questions about their emotional responses to Eilish’s music and to recordings from ASMRtists. Given the popularity of ASMR among Millennials and Gen Z—a single ASMR YouTube channel has over 1.8 million subscribers—the investigation of these techniques in popular music could provide insight into how Eilish’s music is being used to evoke emotions in listeners across the world, especially during the global coronavirus crisis.
Certain microphone techniques are thought to have contributed to the popularity of radio crooners in the late 1920s. In this project, I explore how the same techniques that contributed to Bing Crosby’s status as America’s crooner may have also helped cement Billie Eilish as a popular artist in current American culture. In response to some “sonically intimate” sounds, certain listeners feel the same sense of relaxation and pleasure usually associated with being near a close friend. The popularity of recordings that use these techniques, including the music of Billie Eilish and Bing Crosby, may be explained in part by a need for substitutes for physical proximity following the rise of social media and the quarantine measures of the worldwide pandemic.
Distinguishing interpretational styles of a musical work is an important step in the field of musical performance analysis. This study investigates similarities and differences across performances of the Aria and Aria da Capo in J.S. Bach’s Goldberg Variations. Statements by performers and theorists guided an analysis of the musical score, from which specific performance features likely to contribute to the overall musical character and emotional impact of the twin arias were selected. Supervised machine learning techniques were then used to assess how accurately the performer of the Aria da Capo could be classified from a training set of Aria performances, as well as to discern the relative importance of each feature.
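For the curious, here is a minimal sketch of the kind of supervised classification this involves, assuming a random forest classifier and hypothetical feature names (e.g., "mean_tempo", "ornament_density"); the study’s actual models and features may differ.

```python
# Illustrative sketch (not the study's code): classify the performer of the
# Aria da Capo with a model trained on Aria performances, then inspect
# relative feature importance. Column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

aria = pd.read_csv("aria_features.csv")            # training set: Aria performances
da_capo = pd.read_csv("aria_da_capo_features.csv") # test set: Aria da Capo performances

feature_cols = ["mean_tempo", "tempo_variability", "mean_dynamics",
                "articulation_index", "ornament_density"]

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(aria[feature_cols], aria["performer"])

predictions = clf.predict(da_capo[feature_cols])
print("Classification accuracy:", accuracy_score(da_capo["performer"], predictions))

# Impurity-based importances: one way to see which performance parameters
# drive the classification.
print(pd.Series(clf.feature_importances_, index=feature_cols)
        .sort_values(ascending=False))
```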
Musical themes are among the most memorable parts of a musical work and have been a subject of interest in music theory for centuries. A corpus study of more than 1,000 musical themes showed that first themes are more likely to be strong or energetic than are second themes. The results of a perceptual study are consistent with the idea that listeners can differentiate first and second themes using surface-level features of the musical stimuli. This research provides insight into which musical and acoustic factors musicians perceive in both visual and auditory settings.
We examined the possibility that the interaction between the stressfulness of the music and a listener's capacity for handling stress contributes to that listener's musical preferences. The key prediction relating fitness to musical preference is that the stressfulness of preferred music should tend to reflect a person's capacity for handling stress, including his or her physical fitness. The study used an online questionnaire to assess physical fitness, impulsivity and sensation-seeking tendencies, and musical preferences. To create an independent index of musical stressfulness, a parallel study was conducted in which an independent group of judges rated the stressfulness of the music identified by participants in the main study. The results are consistent with the idea that males, younger participants, people with fewer years of education, and those who are more physically fit tend to prefer more stressful music.