• Hi!
    I'm Yang Zhou

    I'm a 4th-year CS PhD student in the Computer Graphics Research Group at UMass Amherst, advised by Prof. Evangelos Kalogerakis. I work at the intersection of computer graphics and machine learning. In particular, I am interested in using deep learning techniques to help artists, stylists, and animators create better designs. I obtained my master's degree from the Georgia Institute of Technology, and my bachelor's and master's degrees from Shanghai Jiao Tong University, where I was advised by Prof. Weiyao Lin.

    Download CV

Academia

Publications

SceneGraphNet: Neural Message Passing for 3D Indoor Scene Augmentation 2018-2019

Yang Zhou, Z. While, E. Kalogerakis
International Conference on Computer Vision (ICCV), 2019

We propose a neural message passing approach to augment an input 3D indoor scene with new objects that match their surroundings. Given an input, potentially incomplete, 3D scene and a query location, our method predicts a probability distribution over object types that fit well at that location. The distribution is predicted by passing learned messages in a dense graph whose nodes represent objects in the input scene and whose edges represent spatial and structural relationships.

[Project Page] [Paper] [Code]

Predicting Animation Skeletons for 3D Articulated Models via Volumetric Nets 2018-2019

Z. Xu, Yang Zhou, E. Kalogerakis, K. Singh
International Conference on 3D Vision (3DV), 2019

We present a learning method for predicting animation skeletons for input 3D models of articulated characters. In contrast to previous approaches that fit pre-defined skeleton templates or predict fixed sets of joints, our method produces an animation skeleton tailored for the structure and geometry of the input 3D model.

VisemeNet: Audio-Driven Animator-Centric Speech Animation 2016-2018

Yang Zhou, Z. Xu, C. Landreth, S. Maji, E. Kalogerakis, K. Singh
ACM SIGGRAPH, 2018 (also appears in ACM Transactions on Graphics, 37(4), 2018)

We present a novel deep-learning-based approach that produces animator-centric speech motion curves, directly from input audio, to drive a JALI or standard FACS-based production face rig.

[Project Page] [Paper] [Code]

Large-Scale 3D Shape Reconstruction and Segmentation from ShapeNet Core55 2017

L. Yi, L. Shao, M. Savva, H. Huang, Yang Zhou, et al.
International Conference on Computer Vision Workshops (ICCVW), 2017

ShapeNet is an ongoing effort to establish a richly annotated, large-scale dataset of 3D shapes. We collaborated with the ShapeNet team to help build the training and testing datasets for "Large-Scale 3D Shape Reconstruction and Segmentation from ShapeNet Core55". In particular, we helped check for geometric duplicates in the ShapeNet Core dataset.

[3D Shape Reconstruction and Segmentation Task Page] [Paper] [ShapeNet Duplicate Check]

A Tube-and-Droplet-based Approach for Representing and Analyzing Motion Trajectories 2014-2016

W. Lin, Yang Zhou, H. Xu, J. Yan, M. Xu, J. Wu, Z. Liu
IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 39(8), pp. 1489-1503, 2017

We address the problem of representing motion trajectories in a highly informative way, and then use this representation for trajectory analysis. We apply our tube-and-droplet representation to applications including trajectory clustering, trajectory classification and abnormality detection, and 3D action recognition.

[Project Page] [Paper] [Dataset] [Code]

Unsupervised Trajectory Clustering via Adaptive Multi-Kernel-based Shrinkage 2014-2015

H. Xu, Yang Zhou, W. Lin, H. Zha
International Conference on Computer Vision (ICCV), pp. 4328-4336, 2015

We introduce an adaptive multi-kernel-based estimation process that estimates the 'shrunk' positions and speeds of trajectory points. This kernel-based estimation effectively leverages both the structural information within a trajectory and the local motion patterns across multiple trajectories, so that the discriminative power of the shrunk points is properly increased.

[Paper]

Representing and Recognizing Motion Trajectories: A Tube and Droplet Approach 2013-2014

Yang Zhou, W. Lin, H. Su, J. Wu, J. Wang, Y. Zhou
ACM Intl. Conf. on Multimedia (MM), pp. 1077-1080, 2014

This paper addresses the problem of representing and recognizing motion trajectories. We propose a 3D tube that effectively embeds both the motion and the scene-related information of a trajectory, together with a droplet-based method that captures the characteristics of the 3D tube for activity recognition.

[Paper]

Work

Experience

Adobe, Inc | Creative Intelligence Lab

June, 2019 | Research Intern

Collaborating with researchers on facial animation and lip-sync technologies based on deep learning approaches.

Wayfair, Inc | Wayfair Next Research

June, 2018 | Research Intern

Worked on 3D scene synthesis based on deep learning approaches.

NetEase Games, Inc

June, 2015 | Management Trainee

Worked on mobile game design, with a focus on profit models and user experience.

Best way to

Contact Me

The best way to reach me is by email.