Research

Unsupervised Text-to-Sound Mapping via Embedding Space Alignment

Luke Dzwonczyk & Carmine-Emanuele Cella

Proceedings of the 28th International Conference on Digital Audio Effects (DAFx) • 2025 • Ancona, Italy

This paper presents a new artistic tool that maps words to sounds drawn from a user-provided sound corpus.

An Overview on CNMAT Technologies and Future Directions

Carmine-Emanuele Cella, Edmund Campion, Jeremy Wagner, Luke Dzwonczyk, & Jon Kulpa

Proceedings of the 50th International Computer Music Conference • 2025 • Boston, MA

An overview of current technologies and future research directions at the Center for New Music and Audio Technologies (CNMAT).

Generating Music Reactive Videos by Applying Network Bending to Stable Diffusion

Luke Dzwonczyk, Carmine-Emanuele Cella, & David Ban

Journal of the Audio Engineering Society • Vol. 73, No. 6 • 2025

This paper presents a novel approach to generating music-reactive videos by applying network bending techniques to Stable Diffusion models, creating synchronized visuals that respond dynamically to audio input. This is an extended version of the paper "Network Bending of Diffusion Models for Audio-Visual Generation" published at DAFx 2024.

Network Bending of Diffusion Models for Audio-Visual Generation

Luke Dzwonczyk, Carmine-Emanuele Cella, & David Ban

Proceedings of the 27th International Conference on Digital Audio Effects (DAFx) • 2024 • Surrey, UK

A method for modifying the internal structure of diffusion models to create audio-visual art, exploring the intersection of machine learning and creative media generation.

Neural Models for Target-Based Computer-Assisted Musical Orchestration: A Preliminary Study

Carmine-Emanuele Cella, Luke Dzwonczyk, Alejandro Saldarriaga-Fuertes, Hongfu Liu, & Hélène-Camille Crayencour

Journal of Creative Music Systems • 2022

A preliminary investigation into using neural network models for computer-assisted orchestration, focusing on target-based approaches to automatic instrumentation. This is an extended version of the paper "A Study on Neural Models for Target-Based Computer-Assisted Musical Orchestration" published at AIMC 2020.

Source Separation Methods for Computer-assisted Orchestration

Luke Dzwonczyk, Leo Chédin, Alejandro Saldarriaga-Fuertes, Hélène-Camille Crayencour, & Carmine-Emanuele Cella

Proceedings of the 3rd Conference on AI Music Creativity (AIMC) • 2022

An exploration of source separation techniques applied as a pre-processing step to computer-assisted orchestration, investigating methods for isolating and separately orchestrating different layers of the target sound.

A Study on Neural Models for Target-Based Computer-Assisted Musical Orchestration

Carmine-Emanuele Cella, Luke Dzwonczyk, Alejandro Saldarriaga-Fuertes, Hongfu Liu, & Hélène-Camille Crayencour

Proceedings of the 2020 Joint Conference on AI Music Creativity (AIMC) • 2020

A preliminary investigation into using an end-to-end neural network for target-based computer-assisted musical orchestration.