BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163633Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_327@linklings.com
SUMMARY:An Implicit Physical Face Model Driven by Expression and Style
DESCRIPTION:Lingchen Yang (ETH Zürich); Gaspard Zoss and Prashanth Chandran (The Walt Disney Company (Switzerland) GmbH); Paulo Gotardo (Disney Research Studios, The Walt Disney Company (Switzerland) GmbH); Markus Gross (ETH Zürich, The Walt Disney Company (Switzerland) GmbH); Barbara Solenthaler (ETH Zürich); Eftychios Sifakis (University of Wisconsin Madison); and Derek Bradley (The Walt Disney Company (Switzerland) GmbH)\n\n3D facial animation is often produced by manipulating facial deformation models (or rigs) that are traditionally parameterized by expression controls. A key component that is usually overlooked is expression "style", as in, how a particular expression is performed. Although it is common to define a semantic basis of expressions that characters can perform, most characters perform each expression in their own style. To date, style is usually entangled with the expression, and it is not possible to transfer the style of one character to another when considering facial animation. We present a new face model, based on a data-driven implicit neural physics model, that can be driven by both expression and style separately. At the core, we present a framework for learning implicit physics-based actuations for multiple subjects simultaneously, trained on a few arbitrary performance capture sequences from a small set of identities. Once trained, our method allows generalized physics-based facial animation for any of the trained identities, extending to unseen performances. Furthermore, it grants control over the animation style, enabling style transfer from one character to another or blending styles of different characters. Lastly, as a physics-based model, it is capable of synthesizing physical effects, such as collision handling, setting our method apart from conventional approaches.\n\nRegistration Category: Full Access, Enhanced Access, Trade Exhibitor, Experience Hall Exhibitor\n\n
URL:https://asia.siggraph.org/2023/full-program?id=papers_327&sess=sess209
END:VEVENT
END:VCALENDAR