A team of music technology students developed a system that enables novices to create remixes from any four Spotify songs they choose. The system uses source separation to break each song into components such as vocals, drums, and bass. Users can then generate 32-bar compositions in which the choice of which song supplies each component is randomized. The goal is to better understand users' preferences for control versus automation when creating mashups.
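As a minimal sketch of the randomized construction step described above: after source separation, each bar of the mashup can draw each stem (vocals, drums, bass, etc.) from a randomly chosen input song. The stem names, function, and song identifiers here are illustrative assumptions, not the students' actual implementation.

```python
import random

# Typical stem categories produced by source-separation tools (assumed here)
STEMS = ["vocals", "drums", "bass", "other"]

def generate_arrangement(songs, bars=32, seed=None):
    """Randomly assign, for each bar, which song supplies each stem.

    songs: list of song identifiers (e.g. four Spotify tracks).
    Returns a list of `bars` dicts mapping stem name -> source song.
    """
    rng = random.Random(seed)
    return [{stem: rng.choice(songs) for stem in STEMS} for _ in range(bars)]

arrangement = generate_arrangement(
    ["song_a", "song_b", "song_c", "song_d"], bars=32, seed=0
)
print(len(arrangement))        # → 32 (one stem assignment per bar)
print(sorted(arrangement[0]))  # → ['bass', 'drums', 'other', 'vocals']
```

Fixing the random seed makes a generated arrangement reproducible, which is useful when comparing user reactions to the same randomized mashup; varying how many of the choices are exposed to the user is one way to probe the control-versus-automation question.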
The Robotic Musicianship group aims to facilitate meaningful musical interactions between humans and machines, leading to novel musical experiences and outcomes. In our research, we combine computational modeling approaches for perception, interaction, and improvisation with novel approaches for generating acoustic responses through physical and visual means.
The motivation for this work is based on the hypothesis that real-time collaboration between human and robotic players can capitalize on the combination of their unique strengths to produce new and compelling music. Our goal is to combine human qualities, such as musical expression and emotions, with robotic traits, such as powerful processing, the ability to perform sophisticated mathematical transformations, robust long-term memory, and the capacity to play accurately without practice.