BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070250Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231215T131500
DTEND;TZID=Australia/Melbourne:20231215T132500
UID:siggraphasia_SIGGRAPH Asia 2023_sess137_papers_327@linklings.com
SUMMARY:An Implicit Physical Face Model Driven by Expression and Style
DESCRIPTION:Technical Communications, Technical Papers\n\nLingchen Yang (ETH Zürich); Gaspard Zoss and Prashanth Chandran (The Walt Disney Company (Switzerland) GmbH); Paulo Gotardo (Disney Research Studios, The Walt Disney Company (Switzerland) GmbH); Markus Gross (ETH Zürich, The Walt Disney Company (Switzerland) GmbH); Barbara Solenthaler (ETH Zürich); Eftychios Sifakis (University of Wisconsin-Madison); and Derek Bradley (The Walt Disney Company (Switzerland) GmbH)\n\n3D facial animation is often produced by manipulating facial deformation models (or rigs), which are traditionally parameterized by expression controls. A key component that is usually overlooked is expression "style", as in how a particular expression is performed. Although it is common to define a semantic basis of expressions that characters can perform, most characters perform each expression in their own style. To date, style is usually entangled with the expression, and it is not possible to transfer the style of one character to another when considering facial animation. We present a new face model, based on a data-driven implicit neural physics model, that can be driven by both expression and style separately. At the core, we present a framework for learning implicit physics-based actuations for multiple subjects simultaneously, trained on a few arbitrary performance capture sequences from a small set of identities. Once trained, our method allows generalized physics-based facial animation for any of the trained identities, extending to unseen performances. Furthermore, it grants control over the animation style, enabling style transfer from one character to another or blending the styles of different characters. Lastly, as a physics-based model, it is capable of synthesizing physical effects, such as collision handling, setting our method apart from conventional approaches.\n\nRegistration Category: Full Access\n\nSession Chair: Weidan Xiong (Shenzhen University)
URL:https://asia.siggraph.org/2023/full-program?id=papers_327&sess=sess137
END:VEVENT
END:VCALENDAR