I am a PhD student in Artificial Intelligence and Music at Queen Mary University of London. My research bridges neural audio synthesis, the psychoacoustics of musical timbre, and applications of deep generative models to audio. I am very grateful to be funded by the UKRI Centre for Doctoral Training in Artificial Intelligence and Music, and am a member of the Centre for Digital Music.
You can contact me at b.j.hayes (at) qmul.ac.uk.
- **ISMIR** Neural Waveshaping Synthesis. In *Proceedings of the 22nd International Society for Music Information Retrieval Conference*, 2021.
- **ICMPC** Perceptual and semantic scaling of FM synthesis timbres: Common dimensions and the role of expertise. In *16th International Conference on Music Perception and Cognition*, 2021.
- **DMRN** Perceptual Similarities in Neural Timbre Embeddings. In *DMRN+15: Digital Music Research Network One-Day Workshop*, 2020.
- **Timbre** There’s More to Timbre than Musical Instruments: Semantic Dimensions of FM Sounds. In *Proceedings of the 2nd International Conference on Timbre*, 2020.
- **Timbre** Evidence for Timbre Space Robustness to an Uncontrolled Online Stimulus Presentation. In *Proceedings of the 2nd International Conference on Timbre*, 2020.