Abstract

In computer vision, human pose synthesis and transfer deal with probabilistic image generation of a person in a previously unseen pose from an already available observation of that person. Though researchers have recently proposed several methods to achieve this task, most of these techniques derive the target pose directly from the desired target image on a specific dataset, making the underlying process challenging to apply in real-world scenarios, as generating the target image is the actual aim. In this paper, we first present the shortcomings of current pose transfer algorithms and then propose a novel text-based pose transfer technique to address those issues. We divide the problem into three independent stages: (a) text to pose representation, (b) pose refinement, and (c) pose rendering. To the best of our knowledge, this is one of the first attempts to develop a text-based pose transfer framework. We also introduce a new dataset, DF-PASS, by adding descriptive pose annotations for the images of the DeepFashion dataset. The proposed method generates promising results with significant qualitative and quantitative scores in our experiments.

Network Architecture




The workflow is divided into three stages. In stage 1, we estimate a spatial representation \(K^*_B\) for the target pose \(P_B\) from the corresponding text description embedding \(v_B\). In stage 2, we regressively refine the initial estimation of the facial keypoints to obtain the refined target keypoints \(\tilde{K}^*_B\). Finally, in stage 3, we render the target image \(\tilde{I}_B\) by conditioning the pose transfer on the source image \(I_A\), whose keypoints \(K_A\) correspond to the source pose \(P_A\).
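
To make the data flow between the stages concrete, below is a minimal PyTorch sketch of stages 1 and 2. The module names, layer widths, the 300-dimensional text embedding, and the 18-keypoint skeleton are illustrative assumptions, not the released implementation; stage 3 is a full conditional image generator and is only indicated in a comment.

import torch
import torch.nn as nn

class TextToPose(nn.Module):
    # Stage 1 (hypothetical layout): regress an initial spatial keypoint
    # estimate K*_B from the text description embedding v_B.
    def __init__(self, embed_dim=300, num_keypoints=18):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, num_keypoints * 2),  # one (x, y) pair per keypoint
        )

    def forward(self, v_B):
        return self.mlp(v_B).view(-1, self.num_keypoints, 2)

class KeypointRefiner(nn.Module):
    # Stage 2 (simplified): a residual update over the full keypoint vector;
    # the paper refines the facial keypoints specifically.
    def __init__(self, num_keypoints=18):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Linear(num_keypoints * 2, 256), nn.ReLU(),
            nn.Linear(256, num_keypoints * 2),
        )

    def forward(self, K_star_B):
        flat = K_star_B.flatten(1)
        return (flat + self.refine(flat)).view_as(K_star_B)

# Stage 3 is a conditional image generator (e.g. an encoder-decoder GAN)
# that renders the target image from the source image I_A, its keypoints
# K_A, and the refined target keypoints; it is omitted here for brevity.

v_B = torch.randn(1, 300)                     # embedding of the pose description
K_star_B = TextToPose()(v_B)                  # stage 1: initial keypoint estimate
K_tilde_star_B = KeypointRefiner()(K_star_B)  # stage 2: refined keypoints

Decoupling the stages this way means stage 3 can be trained and evaluated independently of how the target keypoints were obtained, whether from text or from a pose estimator.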

Generation Results



Keypoints-guided methods tend to produce structurally inaccurate results when the physical appearance of the target pose reference differs significantly from the condition image. This failure occurs more frequently for out-of-distribution target poses than for in-distribution ones. On the other hand, the existing text-guided method occasionally misinterprets the target pose due to the limited set of basic poses used for pose representation. The proposed text-guided technique successfully addresses these issues while retaining the ability to generate visually convincing results close to the keypoints-guided baseline.

Citation

@inproceedings{roy2022tips,
  title     = {TIPS: Text-Induced Pose Synthesis},
  author    = {Roy, Prasun and Ghosh, Subhankar and Bhattacharya, Saumik and Pal, Umapada and Blumenstein, Michael},
  booktitle = {The European Conference on Computer Vision (ECCV)},
  month     = {October},
  year      = {2022}
}

Video Presentation

Poster Presentation

News and Updates

  Jul 25, 2022

We have released our paper, supplementary materials, code, datasets and pretrained models.

  Jul 4, 2022

Our paper has been accepted at ECCV 2022.
More details about the code and datasets will be released soon.


Copyright 2022 by the authors | Made with ❤️ on Earth.