MakeItTalk: Speaker-Aware Talking-Head Animation
Yang Zhou, Xintong Han, Eli Shechtman, Jose Echevarria, Evangelos Kalogerakis, Dingzeyu Li
ACM Transactions on Graphics (SIGGRAPH Asia), 2020
We present a method that generates expressive talking-head animations from a single facial image, with audio as the only additional input. Our method first disentangles the content and the speaker information in the input audio signal. The audio content robustly controls the motion of the lips and nearby facial regions, while the speaker information determines the specifics of the facial expressions and the rest of the talking-head dynamics. Our method can synthesize photorealistic videos of entire talking heads with a full range of motion, and can also animate artistic paintings, sketches, 2D cartoon characters, Japanese manga, and stylized caricatures within a single unified framework.
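As a concrete illustration of this content/speaker disentanglement, the sketch below splits audio-driven facial-landmark prediction into a content branch (lips and nearby regions) and a speaker-aware branch (expression and head dynamics), with their predicted displacements summed onto the static landmarks of the single input image. All module names, dimensions (80 mel bins, 256-dim hidden state, 128-dim speaker embedding, 68 landmarks), and the simple LSTM/MLP choices here are illustrative assumptions, not the authors' released architecture.

```python
import torch
import torch.nn as nn


class ContentEncoder(nn.Module):
    """Encodes audio features into a speaker-independent content code."""
    def __init__(self, n_mels=80, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_mels, hidden, batch_first=True)

    def forward(self, mel):                      # mel: (B, T, n_mels)
        out, _ = self.rnn(mel)
        return out                               # (B, T, hidden)


class DisplacementHead(nn.Module):
    """Regresses per-frame 3D displacements for all landmarks."""
    def __init__(self, in_dim, hidden=256, n_landmarks=68):
        super().__init__()
        self.n_landmarks = n_landmarks
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_landmarks * 3))

    def forward(self, x):                        # x: (B, T, in_dim)
        B, T, _ = x.shape
        return self.mlp(x).view(B, T, self.n_landmarks, 3)


class TalkingHeadLandmarks(nn.Module):
    """Two-branch landmark predictor (illustrative sketch): the content
    branch is driven by the audio content code alone, while the
    speaker-aware branch is additionally conditioned on a speaker
    identity embedding."""
    def __init__(self, n_mels=80, hidden=256, spk_dim=128):
        super().__init__()
        self.content_enc = ContentEncoder(n_mels, hidden)
        self.content_branch = DisplacementHead(hidden, hidden)
        self.speaker_branch = DisplacementHead(hidden + spk_dim, hidden)

    def forward(self, mel, spk_emb, ref_landmarks):
        # mel: (B, T, n_mels); spk_emb: (B, spk_dim);
        # ref_landmarks: (B, 68, 3), detected once in the input image.
        content = self.content_enc(mel)
        spk = spk_emb.unsqueeze(1).expand(-1, content.size(1), -1)
        disp = (self.content_branch(content)
                + self.speaker_branch(torch.cat([content, spk], dim=-1)))
        return ref_landmarks.unsqueeze(1) + disp  # (B, T, 68, 3)


if __name__ == "__main__":
    model = TalkingHeadLandmarks()
    mel = torch.randn(2, 100, 80)      # 100 audio frames, batch of 2
    spk = torch.randn(2, 128)          # speaker embeddings
    ref = torch.randn(2, 68, 3)        # static landmarks from the image
    print(model(mel, spk, ref).shape)  # torch.Size([2, 100, 68, 3])
```

Because only the landmark sequence depends on the audio, the same predicted motion can drive either a photorealistic image-to-image renderer or a simple warp of a non-photorealistic drawing, which is what makes the single unified framework possible.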