In computer vision, human pose synthesis and transfer deal with probabilistic image generation of a person in a previously unseen pose from an already available observation of that person. Though researchers have recently proposed several methods to achieve this task, most of these techniques derive the target pose directly from the desired target image on a specific dataset, making the underlying process difficult to apply in real-world scenarios where generating the target image is the actual aim. In this paper, we first present the shortcomings of current pose transfer algorithms and then propose a novel text-based pose transfer technique to address those issues. We divide the problem into three independent stages: (a) text to pose representation, (b) pose refinement, and (c) pose rendering. To the best of our knowledge, this is one of the first attempts to develop a text-based pose transfer framework. We also introduce a new dataset, DF-PASS, by adding descriptive pose annotations to the images of the DeepFashion dataset. The proposed method generates promising results, achieving significant qualitative and quantitative scores in our experiments.
The workflow is divided into three stages. In stage 1, we estimate a spatial representation \(K^*_B\) for the target pose \(P_B\) from the corresponding text description embedding \(v_B\). In stage 2, we regressively refine the initial estimates of the facial keypoints to obtain the refined target keypoints \(\tilde{K}^*_B\). Finally, in stage 3, we render the target image \(\tilde{I}_B\) by conditioning the pose transfer on the source image \(I_A\) with keypoints \(K_A\) corresponding to the source pose \(P_A\).
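As a concrete illustration of this pipeline, below is a minimal PyTorch-style sketch of the three stages. All module names, layer choices, the keypoint count (18), and the working resolution (64x64) are assumptions made for illustration; this is not the released TIPS implementation, and for simplicity the refinement stage here acts on all keypoints rather than only the facial ones as in the paper.

# Minimal sketch of the three-stage pipeline (illustrative assumptions only).
import torch
import torch.nn as nn

class TextToPoseGenerator(nn.Module):
    # Stage 1: map the text description embedding v_B to initial
    # keypoint heatmaps K*_B (assumed: 18 keypoints at 64x64).
    def __init__(self, embed_dim=300, num_keypoints=18, heatmap_size=64):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.heatmap_size = heatmap_size
        self.fc = nn.Sequential(
            nn.Linear(embed_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, num_keypoints * heatmap_size * heatmap_size),
        )

    def forward(self, v_B):
        h = self.fc(v_B)
        return h.view(-1, self.num_keypoints, self.heatmap_size, self.heatmap_size)

class PoseRefinementNet(nn.Module):
    # Stage 2: regressive residual refinement of the initial keypoint
    # estimate, yielding the refined target keypoints K~*_B.
    def __init__(self, num_keypoints=18):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(num_keypoints, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, num_keypoints, kernel_size=3, padding=1),
        )

    def forward(self, K_star_B):
        return K_star_B + self.refine(K_star_B)

class PoseRenderer(nn.Module):
    # Stage 3: render the target image I~_B conditioned on the source
    # image I_A, its keypoints K_A, and the refined target keypoints.
    def __init__(self, num_keypoints=18):
        super().__init__()
        in_channels = 3 + 2 * num_keypoints  # I_A + K_A + K~*_B
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, I_A, K_A, K_tilde_B):
        x = torch.cat([I_A, K_A, K_tilde_B], dim=1)
        return self.net(x)

# Usage (all shapes are illustrative):
v_B = torch.randn(1, 300)                  # text description embedding
I_A = torch.randn(1, 3, 64, 64)            # source image
K_A = torch.randn(1, 18, 64, 64)           # source keypoint heatmaps
K_star_B = TextToPoseGenerator()(v_B)      # stage 1
K_tilde_B = PoseRefinementNet()(K_star_B)  # stage 2
I_B = PoseRenderer()(I_A, K_A, K_tilde_B)  # stage 3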
Keypoints-guided methods tend to produce structurally inaccurate results when the physical appearance of the target pose reference differs significantly from the condition image. This failure occurs more frequently for out-of-distribution target poses than for in-distribution ones. On the other hand, the existing text-guided method occasionally misinterprets the target pose due to the limited set of basic poses used for pose representation. The proposed text-guided technique successfully addresses these issues while retaining the ability to generate visually convincing results close to the keypoints-guided baseline.
@inproceedings{roy2022tips,
  title     = {TIPS: Text-Induced Pose Synthesis},
  author    = {Roy, Prasun and Ghosh, Subhankar and Bhattacharya, Saumik and Pal, Umapada and Blumenstein, Michael},
  booktitle = {The European Conference on Computer Vision (ECCV)},
  month     = {October},
  year      = {2022}
}