See also my Google Scholar.
TeaserGen: Generating Teasers for Long Documentaries
Weihan Xu, Paul Pu Liang, Haven Kim, Julian McAuley, Taylor Berg-Kirkpatrick, and Hao-Wen Dong
Under review, 2024
paper
demo
Multimodal Learning
Generating Symbolic Music from Natural Language Prompts using an LLM-Enhanced Dataset
Weihan Xu, Julian McAuley, Taylor Berg-Kirkpatrick, Shlomo Dubnov, and Hao-Wen Dong
Under review, 2024
paper
Music Generation · Multimodal Learning
ViolinDiff: Enhancing Expressive Violin Synthesis with Pitch Bend Conditioning
Daewoong Kim, Hao-Wen Dong, and Dasaem Jeong
Under review, 2024
paper
Audio Synthesis · Music Performance Rendering
Nested Music Transformer: Sequentially Decoding Compound Tokens in Symbolic Music and Audio Generation
Jiwoo Ryu, Hao-Wen Dong, Jongmin Jung, and Dasaem Jeong
International Society for Music Information Retrieval Conference (ISMIR), 2024
paper
demo
reviews
Music Generation
CLIPSonic: Text-to-Audio Synthesis with Unlabeled Videos and Pretrained Language-Vision Models
Hao-Wen Dong, Xiaoyu Liu, Jordi Pons, Gautam Bhattacharya, Santiago Pascual, Joan Serrà, Taylor Berg-Kirkpatrick, and Julian McAuley
IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2023
paper
demo
video
slides
reviews
Oral presentation · Audio Synthesis · Multimodal Learning
Multitrack Music Transformer
Hao-Wen Dong, Ke Chen, Shlomo Dubnov, Julian McAuley, and Taylor Berg-Kirkpatrick
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023
paper
demo
video
slides
code
reviews
Oral presentation · Music Generation
CLIPSep: Learning Text-queried Sound Separation with Noisy Unlabeled Videos
Hao-Wen Dong, Naoya Takahashi, Yuki Mitsufuji, Julian McAuley, and Taylor Berg-Kirkpatrick
International Conference on Learning Representations (ICLR), 2023
paper
demo
video
slides
poster
code
reviews
Sound Separation · Multimodal Learning
Improving Choral Music Separation through Expressive Synthesized Data from Sampled Instruments
Ke Chen, Hao-Wen Dong, Yi Luo, Julian McAuley, Taylor Berg-Kirkpatrick, Miller Puckette, and Shlomo Dubnov
International Society for Music Information Retrieval Conference (ISMIR), 2022
paper
demo
code
reviews
Sound Separation
Deep Performer: Score-to-Audio Music Performance Synthesis
Hao-Wen Dong, Cong Zhou, Taylor Berg-Kirkpatrick, and Julian McAuley
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022
paper
demo
video
slides
poster
reviews
Audio Synthesis · Music Performance Rendering
Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music
Hao-Wen Dong, Chris Donahue, Taylor Berg-Kirkpatrick, and Julian McAuley
International Society for Music Information Retrieval Conference (ISMIR), 2021
paper
demo
video
slides
code
reviews
Music Compositional Tools
An Empirical Evaluation of End-to-End Polyphonic Optical Music Recognition
Sachinda Edirisooriya, Hao-Wen Dong, Julian McAuley, and Taylor Berg-Kirkpatrick
International Society for Music Information Retrieval Conference (ISMIR), 2021
paper
code
reviews
Optical Music Recognition
MusPy: A Toolkit for Symbolic Music Generation
Hao-Wen Dong, Ke Chen, Julian McAuley, and Taylor Berg-Kirkpatrick
International Society for Music Information Retrieval Conference (ISMIR), 2020
paper
video
slides
poster
code
documentation
reviews
Infrastructure
Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation
Hao-Wen Dong and Yi-Hsuan Yang
International Society for Music Information Retrieval Conference (ISMIR), 2018
paper
demo
video
slides
poster
code
reviews
Music Generation
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment
Hao-Wen Dong,* Wen-Yi Hsiao,* Li-Chia Yang, and Yi-Hsuan Yang (*equal contribution)
AAAI Conference on Artificial Intelligence (AAAI), 2018
paper
demo
slides
code
Oral presentation · Music Generation
Equipping Pretrained Unconditional Music Transformers with Instrument and Genre Controls
Weihan Xu, Julian McAuley, Shlomo Dubnov, and Hao-Wen Dong
IEEE Big Data Workshop on AI Music Generation (AIMG), 2023
paper
demo
reviews
AI Music Innovation Award · Music Generation
CLIPSynth: Learning Text-to-Audio Synthesis from Videos using CLIP and Diffusion Models
Hao-Wen Dong, Gunnar A. Sigurdsson, Chenyang Tao, Jiun-Yu Kao, Yu-Hsiang Lin, Anjali Narayan-Chen, Arpit Gupta, Tagyoung Chung, Jing Huang, Nanyun Peng, and Wenbo Zhao
CVPR Workshop on Sight and Sound (WSS), 2023
paper
demo
video
slides
Audio Synthesis · Multimodal Learning
A New Dataset for Tag- and Text-based Controllable Symbolic Music Generation
Weihan Xu, Julian McAuley, Taylor Berg-Kirkpatrick, Shlomo Dubnov, and Hao-Wen Dong
ISMIR Late-Breaking Demos, 2024
paper
demo
Music Generation · Multimodal Learning
Pypianoroll: Open Source Python Package for Handling Multitrack Pianorolls
Hao-Wen Dong, Wen-Yi Hsiao, and Yi-Hsuan Yang
ISMIR Late-Breaking Demos, 2018
paper
poster
code
documentation
Infrastructure
MuseGAN: Demonstration of a Convolutional GAN Based Model for Generating Multi-track Piano-rolls
Hao-Wen Dong,* Wen-Yi Hsiao,* Li-Chia Yang, and Yi-Hsuan Yang (*equal contribution)
ISMIR Late-Breaking Demos, 2017
paper
poster
Music Generation
On Output Activation Functions for Adversarial Losses: A Theoretical Analysis via Variational Divergence Minimization and An Empirical Study on MNIST Classification
Hao-Wen Dong and Yi-Hsuan Yang
arXiv preprint arXiv:1901.08753, 2019
paper
demo
code
Fundamental Machine Learning
Training Generative Adversarial Networks with Binary Neurons by End-to-end Backpropagation
Hao-Wen Dong and Yi-Hsuan Yang
arXiv preprint arXiv:1810.04714, 2018
paper
demo
slides
code
Fundamental Machine Learning