Chris Chafe is a composer, improvisor, and cellist, developing much of his music alongside computer-based research. He is Director of Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA). In 2019, he was International Visiting Research Scholar at the Peter Wall Institute for Advanced Studies at The University of British Columbia, Visiting Professor at the Politecnico di Torino, and Edgard-Varèse Guest Professor at the Technical University of Berlin. At IRCAM (Paris) and The Banff Centre (Alberta), he has pursued methods for digital synthesis, music performance, and real-time internet collaboration. During the pandemic he has released an album, “Time Crystal,” on Ravello Records, performed over 60 concerts online, and contributed to a large volunteer effort to improve network music performance. At CCRMA he is involved in research into wavefield synthesis for physical models, and he is learning from his co-workers about deep learning networks for music prediction and about how quantum computing technologies can be introduced into music making. (https://chrischafe.net/)
Unlocking Musical Performances During the Lockdowns: The presentation will feature performances from the year of COVID-19 quarantines, largely focused on how traditional ensembles could be reconstituted online from home. New technical work and new discoveries about the capabilities of today’s network and computing infrastructure have come from volunteer code contributors, companies, a new foundation, and several computer music research centers. Above all, improvements have been driven by the musicians who have taken part. The pandemic has ushered in a new phase of development driven by musicians seeking solutions, particularly ease of use and the ability to scale across worldwide cloud infrastructure. With orchestral-sized ensembles urgently in need of ways to rehearse on the network, and with most participants running their systems over commodity connections, this “new reality” runs counter to what is required for ultra-low-latency rhythmic synchronization. JackTrip, which has generally been run as a native software application, is now complemented by dedicated solutions including numerous Raspberry Pi-based systems, standalone physical web devices, and browser-based WebRTC and Pure Data versions. I conclude with some thoughts about how, in our physical realms, we are creatures who listen and function with inherent delays, and about Internet Acoustics as a new realm into which we are expanding. Pre-COVID, that was more on the level of a thought experiment; now we are accelerating towards it.
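The tension between commodity connections and ultra-low-latency synchronization comes down to simple arithmetic: propagation delay over distance plus audio buffering at each end. The sketch below illustrates that budget. The ~25 ms one-way threshold, the 1.5× routing-detour factor, and the 5 ms per-endpoint buffering are illustrative assumptions for this example, not figures from the talk.

```python
# Back-of-the-envelope latency budget for networked ensemble playing.
# All constants here are rough illustrative assumptions.

SPEED_IN_FIBER_KM_S = 200_000  # light in optical fiber, roughly 2/3 of c

def one_way_latency_ms(distance_km, buffer_ms=5.0, routing_overhead=1.5):
    """Estimate one-way latency: fiber propagation (inflated for routing
    detours) plus audio buffering at the endpoints."""
    propagation_ms = distance_km * routing_overhead / SPEED_IN_FIBER_KM_S * 1000
    return propagation_ms + buffer_ms

THRESHOLD_MS = 25.0  # commonly cited rough ceiling for tight rhythmic sync

for city_pair, km in [("San Francisco-Los Angeles", 600),
                      ("San Francisco-New York", 4100),
                      ("San Francisco-Berlin", 9100)]:
    latency = one_way_latency_ms(km)
    verdict = "within" if latency <= THRESHOLD_MS else "beyond"
    print(f"{city_pair}: ~{latency:.1f} ms one-way, {verdict} the sync threshold")
```

Even under these optimistic assumptions, transcontinental distances exceed the rhythmic-sync budget on propagation alone, which is why much of the pandemic-era work focused instead on ease of use and graceful scaling rather than eliminating delay.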
Emilia Gómez is Lead Scientist of the HUMAINT project that studies the impact of Artificial Intelligence on human behaviour, carried out at the Joint Research Centre, European Commission. She is also a Guest Professor at the Department of Information and Communication Technologies, Universitat Pompeu Fabra in Barcelona, where she leads the MIR (Music Information Research) lab of the Music Technology Group and coordinates the TROMPA (Towards Richer Online Music Public-domain Archives) H2020 project.
Emilia Gómez has been involved in the Sound and Music Computing Network for many years, contributing in several roles, including author, reviewer, and board member. She has also served the ISMIR community, and was the first woman to be president of the International Society for Music Information Retrieval. She is particularly interested in improving the gender and cultural diversity of our research field. (https://www.emiliagomez.com)
TROMPA: Towards Richer Online Music Public-domain Archives: In this talk, I will present the main approach and outcomes of the TROMPA Horizon 2020 European project, which I have coordinated in recent years, working with researchers on the use of machine and human intelligence for the enrichment of classical music archives. Classical music, although a historical genre, is continually (re)interpreted and revitalised through musical performance. TROMPA intends to enrich and democratise publicly available classical music archives through a user-centred co-creation setup. For analysing and linking music data at scale, the project employs and improves state-of-the-art technology. Music-loving citizens then cooperate with the technology, giving feedback on algorithmic results and annotating the data according to their personal expertise. Following an open innovation philosophy, all derived knowledge is released to the community in reusable ways. This enables many uses in applications which directly benefit crowd contributors and further audiences. TROMPA demonstrates this for music scholars, orchestras, piano players, choir singers, and music enthusiasts.
Scot Gresham-Lancaster is a composer, performer, instrument builder, and educator. He is a Research Scientist with the startup StrangeData LLC and Visiting Researcher at CNMAT, UC Berkeley. The focus of his research is the sonification of data sets in tight relationship with visualizations (multimodal representations). As a member of the HUB, he is an early pioneer of networked computer music and has developed many “cellphone operas”. He has created a series of co-located international Internet performances and has developed audio for several games and interactive products. He is an expert in educational technology. (https://scot.greshamlancaster.com/)
Computer Network Music – an examination of the roots of a new genre of computer music: This talk describes a new genre of music practice in which interactions among networks of personal computers generate note and sound choices. It is the speaker’s feeling that this approach grew directly out of a cultural sense of a collective technological utopia, and that it was made possible by the availability of personal computer technology and networking. Initially practiced by a community of electroacoustic composer/performers from the San Francisco Bay Area circa 1978, it spread to become part of the practices of many laptop composers in a variety of ways. There is an important distinction to be made between work by heterogeneous collectives, starting with the League of Automatic Music Composers, and that of homogeneous “Laptop Orchestras”.
Michele Ducceschi currently serves as Principal Investigator for the European Research Council (ERC) Starting Grant NEMUS, a 5-year project aiming to synthesise the sound of historical musical instruments that are currently out of playing condition. Previously, he was a Leverhulme Early Career Fellow (2017) with the Acoustics and Audio Group at the University of Edinburgh, Scotland. He was also a Royal Society Newton International Fellow (2015) and part of the NESS project. His research deals primarily with the sound synthesis of acoustic instruments by physical modelling. He is particularly interested in the efficient simulation of nonlinear systems, either lumped or distributed. He is also interested in mechanical reverberation. (https://mdphys.org/)
Real-time, large scale physical modelling sound synthesis: Physical modelling sound synthesis has deep roots. In fact, the first ever example of a partial differential equation dealt with the musical problem of the vibrating string, which puzzled the minds of the most renowned scientists of the mid-1700s. The ideas that came into existence during this “vibrating string controversy” established the foundation of early physical modelling synthesis techniques, from digital waveguides to modal methods. Today, mainstream numerical methods can be employed to solve complex mathematical equations using just a fraction of the available CPU. But things are not straightforward, and considerable effort is spent in the design of suitable integration algorithms. In this talk, a coarse review of the leading ideas in physical modelling sound synthesis will be given. In the second part, illustrative examples of typical objects (oscillators, strings and plates) will be shown. Finally, demos of advanced physical models (a plate reverb, and a spring-bar network called Derailer) will be played. The demos are freely available for download at www.physicalaudio.co.uk