HiFiVFS: High Fidelity Video Face Swapping

Xu Chen*,1, Keke He*,1, Junwei Zhu†,1, Yanhao Ge2, Wei Li2, Chengjie Wang1
1Tencent, 2VIVO
*Indicates equal contribution. †Corresponding author.

Abstract

Face swapping aims to generate results that combine the identity of the source with the attributes of the target. Existing methods primarily focus on image-based face swapping: when processing videos, each frame is handled independently, making it difficult to ensure temporal stability. From a model perspective, face swapping is gradually shifting from generative adversarial networks (GANs) to diffusion models (DMs), as DMs have been shown to possess stronger generative capabilities. However, current diffusion-based approaches often employ inpainting techniques, which struggle to preserve fine-grained attributes such as lighting and makeup. To address these challenges, we propose a high fidelity video face swapping (HiFiVFS) framework, which leverages the strong generative capability and temporal prior of Stable Video Diffusion (SVD). We build a fine-grained attribute module that extracts identity-disentangled, fine-grained attribute features through identity desensitization and adversarial learning. Additionally, we introduce detailed identity injection to further enhance identity similarity. Extensive experiments demonstrate that our method achieves state-of-the-art (SOTA) performance in video face swapping, both qualitatively and quantitatively.
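The identity-desensitized adversarial learning mentioned above can be illustrated with a toy objective: an identity probe tries to recover the source identity from the attribute features, while the attribute encoder is trained against it, so that what survives in the features is attributes (lighting, makeup, pose) rather than identity. All names here (e.g. `identity_adversarial_loss`), the binary-identity simplification, and the exact loss form are our own assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def identity_adversarial_loss(attr_feat, id_labels, w):
    """Toy adversarial term for identity desensitization.

    A linear probe `w` predicts the source identity (binary
    here for simplicity) from attribute features. The probe
    minimizes `disc_loss`; the attribute encoder maximizes it
    (equivalently minimizes `enc_loss`), pushing identity cues
    out of the attribute representation.
    """
    logits = attr_feat @ w                      # (B,)
    p = sigmoid(logits)
    eps = 1e-8                                  # numerical safety
    disc_loss = -np.mean(id_labels * np.log(p + eps)
                         + (1 - id_labels) * np.log(1 - p + eps))
    enc_loss = -disc_loss                       # gradient-reversal view
    return disc_loss, enc_loss

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))             # 4 samples, 8-dim features
labels = np.array([0.0, 1.0, 0.0, 1.0])         # toy identity labels
w = rng.standard_normal(8)
d, e = identity_adversarial_loss(feats, labels, w)
```

In the full method this adversarial term would be combined with reconstruction and attribute losses; the sketch only shows the disentanglement mechanism itself.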

Comparisons in the wild

Method Overview

HiFiVFS

Pipeline of our proposed HiFiVFS, including the training and inference phases. HiFiVFS is trained primarily on the SVD framework, using multi-frame input and temporal attention to ensure the stability of the generated videos. In the training phase, HiFiVFS introduces fine-grained attribute learning (FAL) and detailed identity learning (DIL). In FAL, attribute disentanglement and enhancement are achieved through identity desensitization and adversarial learning. DIL uses ID features better suited to face swapping to further boost identity similarity. In the inference phase, FAL retains only Eatt for attribute extraction, making testing more convenient. Note that HiFiVFS is trained and tested in the latent space, but for visualization purposes all processes are illustrated in the original image space.
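The temporal attention used above can be sketched as self-attention applied along the frame axis of the latent video tensor, so every spatial location shares information across time. This is a minimal toy implementation; the function name `temporal_self_attention`, the shapes, and the single-head simplification are our assumptions, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_self_attention(latents, Wq, Wk, Wv):
    """Single-head self-attention over the frame axis.

    latents: (T, N, C) — T frames, N spatial tokens, C channels.
    Each spatial token attends across time, which is how video
    diffusion models such as SVD propagate information between
    frames to keep generations temporally stable.
    """
    q, k, v = latents @ Wq, latents @ Wk, latents @ Wv
    # move frames onto the attention axis: (N, T, C)
    q, k, v = (x.transpose(1, 0, 2) for x in (q, k, v))
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])  # (N, T, T)
    out = softmax(scores) @ v                                 # (N, T, C)
    return out.transpose(1, 0, 2)                             # (T, N, C)

T, N, C = 8, 16, 32                    # 8 frames, 16 tokens, 32 channels
rng = np.random.default_rng(0)
x = rng.standard_normal((T, N, C))
Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))
y = temporal_self_attention(x, Wq, Wk, Wv)
print(y.shape)  # → (8, 16, 32)
```

In a real SVD-style U-Net this layer sits alongside spatial attention inside each block; the sketch isolates only the cross-frame step.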

Comparisons on VFHQ-FS

Comparisons on FF++