BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19701004T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19700405T030000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163705Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T174500
DTEND;TZID=Australia/Melbourne:20231213T182900
UID:siggraphasia_SIGGRAPH Asia 2023_sess127@linklings.com
SUMMARY:Beyond Skin Deep
DESCRIPTION:LitNeRF: Intrinsic Radiance Decomposition for High-Quality Vie
 w Synthesis and Relighting of Faces\n\nHigh-fidelity, photorealistic 3D ca
 pture of a human face is a long-standing problem in computer graphics -- t
 he complex material of skin, intricate geometry of hair, and fine-scale te
 xtural details make it challenging. Traditional techniques rely on very la
 rge and expensive capture rigs to reconstru...\n\n\nKripasindhu Sarkar (Go
 ogle Inc.); Marcel Bühler and Simon Li (ETH Zürich, Google Inc.); and Daoy
 e Wang, Delio Vicini, Jérémy Riviere, Yinda Zhang, Sergio Orts-Escolano, P
 aulo Gotardo, Thabo Beeler, and Abhimitra Meka (Google Inc.)\n------------
 ---------\nMapping and Recognition of Facial Expressions on Another Person
 's Look-Alike Avatar\n\nThis work analyzes facial expressions and eye move
 ments from three actors mapped onto one look-alike avatar, aiming to enha
 nce VR realism and study the effects on identification and recognition.\n
 \n\nBirate S
 onia (University of Virginia), Trinity Suma (Columbia University), and Kwa
 me Agyemang and Oyewole Oyekoya (Hunter College)\n---------------------\nE
 motional Speech-Driven Animation with Content-Emotion Disentanglement\n\nT
 o be widely adopted, 3D facial avatars need to be animated easily, realist
 ically, and directly from speech signals. While the best recent methods g
 enerate 3D animations that are synchronized with the input audio, they lar
 gely ignore the impact of emotions on facial expressions. Instead, their f
 ocu...\n\n\nRadek Daněček (Max Planck Institute for Intelligent Systems); 
 Kiran Chhatre (KTH Royal Institute of Technology); Shashank Tripathi, Yand
 ong Wen, and Michael Black (Max Planck Institute for Intelligent Systems);
  and Timo Bolkart (Max Planck Institute for Intelligent Systems)\n---------
 ------------\nPortrait Expression Editing With Mobile Photo Sequence\n\nWe
  introduce ExShot, a system that enables high-quality portrait expression
  editing from mobile photo sequences, extracting expression information to
  ensure editing quality.\n\n\nYiqin Zhao (Worcester Polytechnic Institute)
 ; Rohit Pandey, Yinda Zhang, Ruofei Du, Feitong Tan, and Chetan Ramaiah (G
 oogle Inc.); Tian Guo (Worcester Polytechnic Institute); and Sean Fanello 
 (Google Inc.)\n---------------------\nEfficient Incremental Potential Cont
 act for Actuated Face Simulation\n\nWe present a simulator for face animat
 ion driven by muscle actuation. Through the integration of IPC and our pro
 posed optimization strategies, we can efficiently simulate challenging exp
 ressions with rich contact.\n\n\nBo Li, Lingchen Yang, and Barbara Solenth
 aler (ETH Zürich)\n\nRegistration Category: Full Access\n\nSession Chair: 
 Jernej Barbic (University of Southern California)
END:VEVENT
END:VCALENDAR
