BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070240Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_327@linklings.com
SUMMARY:An Implicit Physical Face Model Driven by Expression and Style
DESCRIPTION:Technical Papers\n\nLingchen Yang (ETH Zürich); Gaspard Zoss and Prashanth Chandran (The Walt Disney Company (Switzerland) GmbH); Paulo Gotardo (Disney Research Studios, The Walt Disney Company (Switzerland) GmbH); Markus Gross (ETH Zürich, The Walt Disney Company (Switzerland) GmbH); Barbara Solenthaler (ETH Zürich); Eftychios Sifakis (University of Wisconsin-Madison); and Derek Bradley (The Walt Disney Company (Switzerland) GmbH)\n\n3D facial animation is often produced by manipulating facial deformation models (or rigs), which are traditionally parameterized by expression controls. A key component that is usually overlooked is expression "style", as in, how a particular expression is performed. Although it is common to define a semantic basis of expressions that characters can perform, most characters perform each expression in their own style. To date, style is usually entangled with the expression, and it is not possible to transfer the style of one character to another when considering facial animation. We present a new face model, based on a data-driven implicit neural physics model, that can be driven by both expression and style separately. At the core, we present a framework for learning implicit physics-based actuations for multiple subjects simultaneously, trained on a few arbitrary performance capture sequences from a small set of identities. Once trained, our method allows generalized physics-based facial animation for any of the trained identities, extending to unseen performances. Furthermore, it grants control over the animation style, enabling style transfer from one character to another or blending styles of different characters. Lastly, as a physics-based model, it is capable of synthesizing physical effects, such as collision handling, setting our method apart from conventional approaches.\n\nRegistration Category: Full Access, Enhanced Access, Trade Exhibitor, Experience Hall Exhibitor
URL:https://asia.siggraph.org/2023/full-program?id=papers_327&sess=sess209
END:VEVENT
END:VCALENDAR