I am a PhD student in Artificial Intelligence and Music at Queen Mary University of London’s Centre for Digital Music. My research currently focuses on differentiable digital signal processing for deep learning based control over audio synthesisers and effects. I am particularly interested in how we might overcome the optimisation pathologies that currently limit this methodology. My broader research interests include audio synthesis, symmetry, meta-learning, timbre perception, and cross-modal interactions. I am jointly supervised by Dr Charalampos Saitis and Dr György Fazekas, and am very grateful to be funded by the UKRI Centre for Doctoral Training in Artificial Intelligence and Music.
Previously, I was Music Lead at the award-winning AI-driven generative music startup Jukedeck, and a research intern with ByteDance's Speech, Audio & Music Intelligence (SAMI) team. I also make music, and have taught undergraduate Electronic and Produced Music at the Guildhall School of Music and Drama.
I am particularly open to collaborations with musicians and artists looking to apply artificial intelligence to their work, engineers interested in building new musical tools, and researchers working on related topics.
You can contact me at b.j.hayes (at) qmul.ac.uk
- The Responsibility Problem in Neural Networks with Unordered Targets. In The First Tiny Papers Track at ICLR 2023 (Tiny Papers @ ICLR 2023), Kigali, Rwanda, May 5, 2023.
- Sinusoidal Frequency Estimation by Gradient Descent. In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
- Disembodied Timbres: A Study on Semantically Prompted FM Synthesis. In Journal of the Audio Engineering Society, 2022.
- Neural Waveshaping Synthesis. In Proceedings of the 22nd International Society for Music Information Retrieval Conference, 2021.