Keywords: Computer Science; Generative Adversarial Network; Face Swapping
Issue Date: 2021
Publisher: FPTU Hà Nội
Abstract:
Many current face-swapping works achieve state-of-the-art, realistic-looking results. However, the developers of these works either do not open-source their code at all or release only the inference portion, making it challenging for the community to replicate the published results. In this work, we walk through the process of applying recent advances in Generative Adversarial Networks to the face-manipulation problem. To this end, we studied a novel framework for high-fidelity face swapping, namely FaceShifter. Unlike earlier works, which used only a limited amount of information captured from the target image to synthesize the swapped face, our framework produces high-fidelity swapped faces by attentionally exploiting and integrating target-face information into the output image. Building on previous works, we develop a novel multi-level face-attribute encoder to efficiently exploit the attribute latent representation. In addition, we propose a new generator that dynamically fuses identity and attribute features using carefully designed Adaptive Attentional Denormalization (AAD) layers for robust image generation. Experiments on a wide range of in-the-wild face datasets indicate that our framework's synthesis is not only more realistic and aesthetically pleasing but also offers network-structure generality, architectural scalability, and more stable preservation of identity features compared with other state-of-the-art techniques. By framing face swapping as image-to-image translation across domains, our method can be trained without subject-specific annotations, which facilitates further research and applications. We also publish the code used for training and testing so that it can be used by anyone for research purposes and by open-source communities.
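To make the AAD mechanism mentioned in the abstract concrete, the forward pass of one such layer can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the shapes and the helper names (`instance_norm`, `aad_forward`) are assumptions, and in the real network the modulation maps and the mask would be produced by learned convolutional/fully connected layers rather than passed in directly.

```python
import numpy as np

def instance_norm(h, eps=1e-5):
    # Normalize each channel of each sample over the spatial dims (NCHW layout).
    mean = h.mean(axis=(2, 3), keepdims=True)
    var = h.var(axis=(2, 3), keepdims=True)
    return (h - mean) / np.sqrt(var + eps)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aad_forward(h, gamma_att, beta_att, gamma_id, beta_id, mask_logits):
    """One Adaptive Attentional Denormalization step (illustrative sketch).

    h          : (N, C, H, W) activation from the previous generator layer
    gamma_att,
    beta_att   : (N, C, H, W) modulation maps derived from the target's
                 attribute embedding (in practice, by a learned conv)
    gamma_id,
    beta_id    : (N, C, 1, 1) modulation derived from the source identity
                 embedding (in practice, by a learned fully connected layer)
    mask_logits: (N, 1, H, W) pre-sigmoid per-pixel attention mask
    """
    h_norm = instance_norm(h)
    a = gamma_att * h_norm + beta_att   # attribute-conditioned branch
    i = gamma_id * h_norm + beta_id     # identity-conditioned branch
    m = sigmoid(mask_logits)            # attention mask in (0, 1)
    # Blend: pixels where m is near 1 are driven by identity features,
    # pixels where m is near 0 keep the target's attribute features.
    return (1.0 - m) * a + m * i
```

The per-pixel mask is what makes the denormalization "attentional": the network learns where identity information should dominate (e.g. the inner face) and where target attributes should be preserved (e.g. background, hair, lighting).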