[IEEE TPAMI] Locally Connected Network for Monocular 3D Human Pose Estimation
We present an approach for estimating 3D human pose from a monocular image. The method consists of two steps: it first estimates a 2D pose from the image and then recovers the corresponding 3D pose. This work focuses on the second step. The Graph Convolutional Network (GCN) has recently become the de facto standard for human-pose-related tasks such as action recognition. However, in this work we show that it has critical limitations when used for 3D pose estimation due to its inherent weight-sharing scheme. The limitations are clearly exposed through a more generic reformulation of the GCN, of which both the GCN and the Fully Connected Network (FCN) are special cases. On top of this formulation, we propose the Locally Connected Network (LCN), which overcomes the limitations of the GCN by allocating dedicated rather than shared filters to different joints. We train the LCN jointly with the 2D pose estimator so that it learns to handle inaccurate 2D poses. We evaluate our approach on two benchmarks and observe that the LCN outperforms the GCN, the FCN, and the state of the art by a large margin. More importantly, it demonstrates stronger cross-dataset generalization because of the sparse connections among joints.
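To make the distinction concrete, here is a minimal NumPy sketch (not the authors' code; layer shapes, the toy skeleton, and all variable names are assumptions) contrasting a vanilla GCN layer, which shares one filter across all joints, with an LCN-style layer that allocates a dedicated filter to each (joint, neighbor) pair within the skeleton graph:

```python
import numpy as np

def gcn_layer(x, adj, W):
    # Vanilla GCN: a single weight matrix W (C_in, C_out) is shared by
    # every joint; adj encodes the skeleton neighborhood (with self-loops).
    return adj @ x @ W

def lcn_layer(x, adj, W_local):
    # LCN-style layer: W_local has shape (J, J, C_in, C_out), i.e. a
    # dedicated filter per (target joint, neighbor) pair. Filters for
    # pairs outside the neighborhood (adj == 0) are never used, so the
    # connectivity stays as sparse as the skeleton itself.
    J, _ = x.shape
    C_out = W_local.shape[-1]
    y = np.zeros((J, C_out))
    for i in range(J):
        for j in range(J):
            if adj[i, j]:
                y[i] += x[j] @ W_local[i, j]
    return y

# Toy 3-joint chain 0-1-2 with self-loops.
adj = np.array([[1., 1., 0.],
                [1., 1., 1.],
                [0., 1., 1.]])
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 2))              # J=3 joints, C_in=2 features
W = rng.standard_normal((2, 4))              # one shared GCN filter
W_local = rng.standard_normal((3, 3, 2, 4))  # per-edge LCN filters

y_gcn = gcn_layer(x, adj, W)          # shape (3, 4)
y_lcn = lcn_layer(x, adj, W_local)    # shape (3, 4)
```

Note that if all the per-edge filters in `W_local` are set to the same matrix `W`, the LCN layer collapses back to the GCN layer, illustrating the sense in which the GCN is a special case of the more generic formulation.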