Sunday, August 19 / 19:00h
As part of the Social Acoustics series, moderated by Brandon LaBelle.
This talk will explore biosensing musical interfaces (such as the BioMuse, developed by Knapp and Lusted in 1988, and the Xth Sense, a wearable biophysical technology developed by Marco Donnarumma between 2010 and 2014) together with histories of electrotactile and vibrotactile communication, speech recognition and synthesis, and voice- and touch-driven digital interfaces. Biosensing musical interfaces are interactive and kinetic systems. They comprise sensors that detect a performer's physical gestures and bioelectrical signals, together with hardware and software that amplify, filter, and digitize those signals and translate them into audio. Through this procedure, biosensing musical interfaces do not simply analyze, control, or amplify; they also interactively shape bodily gestures, signals, and sounds. The interface here can be considered the mediation of the signals, a process that is both physical and abstract.
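As a rough illustration of the signal chain just described (sensor capture, amplification and filtering, digitization, translation into audio), the sketch below traces that chain in Python. Everything in it is an assumption made for illustration: the simulated sensor read, the smoothing window, and the envelope-to-pitch mapping stand in for what, in instruments such as the BioMuse or the Xth Sense, is done by dedicated hardware and far more elaborate software.

import numpy as np

SAMPLE_RATE = 1000      # Hz, hypothetical biosignal sampling rate
AUDIO_RATE = 44100      # Hz, audio output rate

def acquire_biosignal(duration_s=2.0):
    """Stand-in for a sensor read: a noisy burst resembling a muscle
    (EMG-like) signal during a gesture. A real interface would read
    from hardware here."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    burst = np.exp(-((t - 1.0) ** 2) / 0.05)   # gesture "event" at t = 1 s
    return burst * np.random.randn(t.size) * 0.5

def envelope(signal, window=50):
    """Rectify and smooth the raw signal: a crude amplitude follower
    standing in for the analog amplification/filtering stage."""
    rectified = np.abs(signal)
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def map_to_audio(env, base_freq=110.0, freq_span=440.0):
    """Translate the control envelope into sound: the envelope drives
    both the amplitude and the pitch of a sine oscillator."""
    # Resample the control signal up to audio rate (linear interpolation).
    n_audio = int(env.size * AUDIO_RATE / SAMPLE_RATE)
    control = np.interp(np.linspace(0, env.size - 1, n_audio),
                        np.arange(env.size), env)
    freq = base_freq + freq_span * control / (control.max() + 1e-9)
    phase = 2 * np.pi * np.cumsum(freq) / AUDIO_RATE
    return control * np.sin(phase)

audio = map_to_audio(envelope(acquire_biosignal()))
print(f"rendered {audio.size} audio samples, peak {np.abs(audio).max():.3f}")

The point of the sketch is only the shape of the mediation: the same bodily signal is analyzed (the envelope) and simultaneously shapes the sound (its amplitude and pitch), which is the interactive loop described above.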
Tracing a lineage from the seminal works Music for Solo Performer (Alvin Lucier, 1965), Spacecraft (Musica Elettronica Viva, 1967), and Ecology of the Skin (David Rosenboom, 1970), composers, performers, and music technologists engaged in biosensor and gesture-based interactive performance (among them Atau Tanaka, the Biomuse Trio (Ben Knapp, Eric Lyon, and Gascia Ouzounian), Laetitia Sonami, Miguel Ortiz, Marco Donnarumma, Baptiste Caramiaux, Rebecca Fiebrink, Frédéric Bevilacqua, Pavlos Antoniadis, Tod Machover, Teresa Marrin-Nakra, Rosalind Picard, and Jaime Oliver La Rosa) have examined biosensing musical interfaces, drawing on human-computer interaction, digital musical instruments, the expressive capacity of embodied gesture, quantitative and qualitative analysis of emotion, notation, affective computing, algorithmic listening, new forms of interactive and collaborative music making, the co-agency of machine and human learning, and music, health, and wellbeing. Following this lineage, I wish to look at biosensing musical interfaces in a way that is both related and different.
First, referring to histories of electrotactile and vibrotactile communication and of speech recognition and synthesis, I will suggest that biosensing musical interfaces are both touch-driven and voice- and speech-driven technologies. Building on this suggestion and on examples of biosensor performances, I will consider voice- and touch-driven technologies as intertwined. Second, I will discuss the convergences and divergences between contemporary communication and speech technologies developed for deaf and hard of hearing people (such as signing gloves that translate sign language into text and/or automated speech) and biosensing technologies developed for gesture-based interactive performance. Despite their different contexts and applications, the two technologies share some technical foundations. However, they follow different protocols and carry different implications. The former is invested in so-called “efficient and functional” communication. The latter does not necessarily attempt to transmit or signify a particular verbal message; it explores an expressivity grounded in the intentional variation and control of gesture. Yet what the gesture and the corresponding sound express remains uncertain, interruptive, and evocative. Looking at speech and biosensing technologies together, I wish to suggest that the expressivity of biosensor performances prompts a different case of tactile speech and voice, one that is not limited to verbal language, the human body, or the vocal cords.
Tactile Speech is a project that Bulut is currently developing at the Max Planck Research Group “Epistemes of Modern Acoustics,” led by Prof. Dr. Viktoria Tkaczyk at the Max Planck Institute for the History of Science. The study is part of Bulut’s first monograph, Building a Voice: Sound, Surface, Skin.
For more information about Tactile Speech: https://www.mpiwg-berlin.mpg.de/research/projects/tactile-speech
Zeynep Bulut’s research sits at the intersection of voice and sound studies, experimental music, and sound art. Her book project, Building a Voice: Sound, Surface, Skin, theorizes the emergence, embodiment, and mediation of voice as skin. Her articles have appeared in various volumes and journals, including Perspectives of New Music, Postmodern Culture, and Music and Politics. Bulut is a lecturer in music at Queen’s University Belfast, a visiting research fellow at King’s College London, and a visiting researcher at the Max Planck Research Group “Epistemes of Modern Acoustics.” Prior to joining Queen’s, she was an early career lecturer in music at King’s College London and a research fellow at the ICI Berlin Institute for Cultural Inquiry. She received her PhD in Critical Studies/Experimental Practices in Music from the University of California, San Diego. She is sound review editor for Sound Studies: An Interdisciplinary Journal and project lead for the collaborative research initiative “Map A Voice.”