I am an Assistant Professor in the Department of Performing Arts Technology at the University of Michigan. I am also affiliated with the Computer Science and Engineering Department.
My research aims to augment human creativity with machine learning. I develop human-centered generative AI technology that can be integrated into professional creative workflows, with a focus on music, audio, and video content creation. My long-term goal is to lower the barrier to entry for content creation and make professional content creation accessible to everyone.
My research on Human-Centered Generative AI for Content Creation can be categorized into the following three main pillars:
My current research interests include:
Researchers create knowledge.
Teachers organize knowledge.
Engineers apply knowledge.
Prospective students interested in working with me: please read this.
See the full list of publications here (Google Scholar).
CLIPSonic: Text-to-Audio Synthesis with Unlabeled Videos and Pretrained Language-Vision Models
Hao-Wen Dong, Xiaoyu Liu, Jordi Pons, Gautam Bhattacharya, Santiago Pascual, Joan Serrà, Taylor Berg-Kirkpatrick, and Julian McAuley
IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2023
paper · demo · video · slides · reviews
Multitrack Music Transformer
Hao-Wen Dong, Ke Chen, Shlomo Dubnov, Julian McAuley, and Taylor Berg-Kirkpatrick
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023
paper · demo · video · slides · code · reviews
CLIPSep: Learning Text-queried Sound Separation with Noisy Unlabeled Videos
Hao-Wen Dong, Naoya Takahashi, Yuki Mitsufuji, Julian McAuley, and Taylor Berg-Kirkpatrick
International Conference on Learning Representations (ICLR), 2023
paper · demo · video · slides · poster · code · reviews
Deep Performer: Score-to-Audio Music Performance Synthesis
Hao-Wen Dong, Cong Zhou, Taylor Berg-Kirkpatrick, and Julian McAuley
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022
paper · demo · video · slides · poster · reviews
Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music
Hao-Wen Dong, Chris Donahue, Taylor Berg-Kirkpatrick, and Julian McAuley
International Society for Music Information Retrieval Conference (ISMIR), 2021
paper · demo · video · slides · code · reviews
MusPy: A Toolkit for Symbolic Music Generation
Hao-Wen Dong, Ke Chen, Julian McAuley, and Taylor Berg-Kirkpatrick
International Society for Music Information Retrieval Conference (ISMIR), 2020
paper · video · slides · poster · code · documentation · reviews
Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation
Hao-Wen Dong and Yi-Hsuan Yang
International Society for Music Information Retrieval Conference (ISMIR), 2018
paper · demo · video · slides · poster · code · reviews
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment
Hao-Wen Dong,* Wen-Yi Hsiao,* Li-Chia Yang, and Yi-Hsuan Yang (*equal contribution)
AAAI Conference on Artificial Intelligence (AAAI), 2018
paper · demo · slides · code
For more information, please see my CV.