BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023309Z
LOCATION:Hall B7 (1)\, B Block\, Level 7
DTSTART;TZID=Asia/Tokyo:20241203T135800
DTEND;TZID=Asia/Tokyo:20241203T140900
UID:siggraphasia_SIGGRAPH Asia 2024_sess105_tog_107@linklings.com
SUMMARY:Identity-Preserving Face Swapping via Dual Surrogate Generative Models
DESCRIPTION:Technical Papers\n\nZiyao Huang and Fan Tang (Institute of Computing Technology, Chinese Academy of Sciences); Yong Zhang (Tencent); Juan Cao, Chengyu Li, Sheng Tang, and Jintao Li (Institute of Computing Technology, Chinese Academy of Sciences); and Tong-Yee Lee (National Cheng Kung University)\n\nIn this study, we revisit the fundamental setting of face-swapping models and reveal that using only implicit supervision for training makes it difficult for advanced methods to preserve the source identity. We propose a novel reverse pseudo-input generation approach that offers supplemental data for training face-swapping models and addresses this issue. Unlike the traditional pseudo-label-based training strategy, we assume that arbitrary real facial images can serve as ground-truth outputs for the face-swapping network and try to generate the corresponding input pair data. Specifically, we introduce a source-creating surrogate that alters the attributes of the real image while keeping its identity, and a target-creating surrogate that synthesizes attribute-preserved target images with different identities. Our framework, which uses the resulting proxy-paired data as explicit supervision to direct the face-swapping training process, provides a credible and effective optimization direction that boosts identity-preserving capability. We design explicit and implicit adaptation strategies to better approximate the explicit supervision for face swapping.\nQuantitative and qualitative experiments on FF++, FFHQ, and in-the-wild images show that our framework improves the performance of various face-swapping pipelines in terms of visual fidelity and identity preservation. Furthermore, we demonstrate applications of our method to re-aging, swappable attribute customization, cross-domain face swapping, and video face swapping. Code is available at https://github.com/ICTMCG/CSCS.\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Kfir Aberman (Snap)
URL:https://asia.siggraph.org/2024/program/?id=tog_107&sess=sess105
END:VEVENT
END:VCALENDAR