
Chris Rhodes (University of Manchester)

Abstract

In recent years, new wearable sensor technologies (termed gestural interfaces) have allowed music practitioners and researchers to access novel forms of biometric data for use in music composition. Such gestural interfaces measure biometric data from the body, such as electroencephalographic (EEG) and electromyographic (EMG) signals. Biometric datasets are valuable within interactive music composition because they allow us to engage with digital systems in novel ways via human gestural control, for example in VR and game engines. However, certain biometric datasets afford more interactive control than others.

EMG data affords greater control over interactive music systems than EEG because of the nature of the interface. However, EMG data is also complex and difficult to interpret, so modern machine learning (ML) methods are needed to process and predict EMG signals. Using ML, human-computer interaction (HCI) can be heightened, allowing us to control interactive music systems through nuanced physical gestures. Because game engines can impose arbitrary physical laws, they are ideal for exploring how EMG data can drive interaction with digital objects, as well as the resulting sonic consequences; in turn, instrument design becomes less constrained by mechanics. By navigating a series of compositions by the author, this presentation explores how EMG data can be used to create novel interactions within game engines and to stimulate interactive music composition. The pieces use Myo armbands to collect EMG information from performers, apply ML algorithms to the biometric signals via Wekinator, and realise musical interactions within game engines. Ultimately, this presentation asks: How can digital musical instruments be created in game engines through EMG integration? What kinds of interactions become possible within the virtual space via EMG data and gestural interfaces? How does ML affect our ability to build musical HCIs with EMG data?
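
To make the general shape of this pipeline concrete, the sketch below streams EMG-style feature values to Wekinator over OSC. It is a minimal illustration, not the author's actual patch: it assumes Wekinator's commonly used defaults (listening on localhost port 6448 at the address /wek/inputs), the python-osc package, and simulated eight-channel values standing in for a real Myo stream.

```python
# Minimal sketch: stream simulated 8-channel EMG-style values to Wekinator over OSC.
# Assumes Wekinator's usual defaults (localhost, input port 6448, address "/wek/inputs")
# and the python-osc package; a real setup would read these values from a Myo armband.

import math
import time
from pythonosc.udp_client import SimpleUDPClient

WEKINATOR_HOST = "127.0.0.1"
WEKINATOR_PORT = 6448           # Wekinator's default OSC input port (assumed unchanged)
INPUT_ADDRESS = "/wek/inputs"   # Wekinator's default input message address

client = SimpleUDPClient(WEKINATOR_HOST, WEKINATOR_PORT)


def simulated_emg_frame(t: float, channels: int = 8) -> list:
    """Stand-in for one frame of rectified EMG values from the Myo's eight sensors."""
    return [abs(math.sin(t * (i + 1))) for i in range(channels)]


start = time.time()
for _ in range(500):                            # stream a short burst of frames
    frame = simulated_emg_frame(time.time() - start)
    client.send_message(INPUT_ADDRESS, frame)   # Wekinator maps these inputs to its trained outputs
    time.sleep(0.02)                            # ~50 Hz update rate for this sketch
```

In a performance setting, the outputs of the trained Wekinator model would typically be forwarded, again over OSC, to the game engine or synthesis environment to drive object behaviour and sound.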