BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070246Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231214T094000
DTEND;TZID=Australia/Melbourne:20231214T095000
UID:siggraphasia_SIGGRAPH Asia 2023_sess124_papers_381@linklings.com
SUMMARY:AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections
DESCRIPTION:Technical Papers, TOG\n\nYue Wu (Hong Kong University of Science and Technology), Sicheng Xu (Microsoft Research Asia), Jianfeng Xiang (Tsinghua University), Fangyun Wei (Microsoft Research Asia), Qifeng Chen (Hong Kong University of Science and Technology), and Jiaolong Yang and Xin Tong (Microsoft Research Asia)\n\nPrevious animatable 3D-aware GANs for human generation have primarily focused on either the human head or the full body. However, head-only videos are relatively uncommon in real life, and full-body generation typically does not deal with facial expression control and still has challenges in generating high-quality results. Towards applicable video avatars, we present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements. It is a generative model trained on unstructured 2D image collections without using 3D or video data. For the new task, we base our method on the generative radiance manifold representation and equip it with learnable facial and head-shoulder deformations. A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces, which is critical for portrait images. A pose deformation processing network is developed to generate plausible deformations for challenging regions such as long hair. Experiments show that our method, trained on unstructured 2D images, can generate diverse and high-quality 3D portraits with desired control over different properties.\n\nRegistration Category: Full Access\n\nSession Chair: Lin Gao (University of Chinese Academy of Sciences)
URL:https://asia.siggraph.org/2023/full-program?id=papers_381&sess=sess124
END:VEVENT
END:VCALENDAR