Self-supervised dance video synthesis conditioned on music

Abstract

We present a learning-based approach with a pose perceptual loss for automatic music video generation. Our method can produce a realistic dance video that conforms to the beats and rhythms of almost any given music. To achieve this, we first generate a human skeleton sequence from the music and then apply a learned pose-to-appearance mapping to generate the final video. In the skeleton-generation stage, we utilize two discriminators to capture different aspects of the sequence and propose a novel pose perceptual loss to produce natural dances. We also introduce a new cross-modal evaluation of dance quality, which estimates the similarity between the music and dance modalities. Finally, a user study demonstrates that dance videos synthesized by the presented approach are surprisingly realistic.
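
As a rough illustration of how a pose perceptual loss can work, the sketch below compares generated and ground-truth skeleton sequences in the feature space of a pretrained, frozen pose network, analogous to the perceptual loss commonly used in image synthesis. The `pose_net` feature extractor, the tensor layout, and the layer weights are assumptions made for illustration here, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def pose_perceptual_loss(fake_seq, real_seq, pose_net,
                         layer_weights=(1.0, 1.0, 1.0)):
    """Sketch of a pose perceptual loss on skeleton sequences.

    fake_seq, real_seq: (batch, time, joints, 2) joint-coordinate tensors.
    pose_net: a hypothetical pretrained, frozen network that returns a list
        of intermediate feature maps for a skeleton sequence.
    """
    with torch.no_grad():  # the reference sequence needs no gradients
        real_feats = pose_net(real_seq)
    fake_feats = pose_net(fake_seq)
    loss = fake_seq.new_zeros(())
    for w, f_fake, f_real in zip(layer_weights, fake_feats, real_feats):
        # Penalize feature-space distance at each chosen layer
        loss = loss + w * F.l1_loss(f_fake, f_real)
    return loss
```

Because gradients flow only through `fake_seq`, the frozen pose network serves purely as a fixed feature space in which generated motion is compared against natural motion.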

Publication
In ACM MM 2020
Zijian Huang
Master of Science in Computer Science

My research interests include Machine Learning, Security, and Computer Vision. Specifically, I am interested in the robustness of machine learning models, and I am currently working on the robustness of reinforcement learning, supervised by Prof. Bo Li. I have also worked on computer vision tasks such as 3D human pose detection and image/video synthesis.